Oct 10 09:04:58 localhost kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 10 09:04:58 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 10 09:04:58 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 09:04:58 localhost kernel: BIOS-provided physical RAM map:
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 10 09:04:58 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
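
The BIOS-e820 lines above are the firmware's physical memory map; summing the ranges typed "usable" gives the RAM the kernel can actually manage, just under 8 GiB here. A minimal parsing sketch, assuming the exact "BIOS-e820: [mem 0xSTART-0xEND] TYPE" format shown above:

import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all e820 ranges typed 'usable' (end addresses are inclusive)."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start = int(m.group(1), 16)
            end = int(m.group(2), 16)
            total += end - start + 1
    return total

# For the map above: 0x9fc00 + 0xbfedb000 + 0x140000000 = 8,589,126,656 bytes (~8.0 GiB).
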
Oct 10 09:04:58 localhost kernel: NX (Execute Disable) protection: active
Oct 10 09:04:58 localhost kernel: APIC: Static calls initialized
Oct 10 09:04:58 localhost kernel: SMBIOS 2.8 present.
Oct 10 09:04:58 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 10 09:04:58 localhost kernel: Hypervisor detected: KVM
Oct 10 09:04:58 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 10 09:04:58 localhost kernel: kvm-clock: using sched offset of 4773075198 cycles
Oct 10 09:04:58 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 10 09:04:58 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 10 09:04:58 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 10 09:04:58 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 10 09:04:58 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 10 09:04:58 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 10 09:04:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 10 09:04:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 10 09:04:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 10 09:04:58 localhost kernel: Using GB pages for direct mapping
Oct 10 09:04:58 localhost kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 10 09:04:58 localhost kernel: ACPI: Early table checksum verification disabled
Oct 10 09:04:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 10 09:04:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:58 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 10 09:04:58 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:58 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 10 09:04:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 10 09:04:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 10 09:04:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 10 09:04:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 10 09:04:58 localhost kernel: No NUMA configuration found
Oct 10 09:04:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 10 09:04:58 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 10 09:04:58 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
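
The 256 MB reservation follows from crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M on the command line: system RAM (~8 GiB) falls in the 2G-64G band, so 256M is set aside for the kdump kernel (0xbf000000 - 0xaf000000 = 0x10000000 = 256 MiB). A sketch of that range-matching logic, assuming the K/M/G suffixes and first-match semantics the parameter uses:

def parse_size(s):
    """'256M' -> bytes; supports the K/M/G suffixes used by crashkernel=."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    return int(s[:-1]) * units[s[-1]]

def crashkernel_size(spec, system_ram):
    """Return the reservation for the first 'start-[end]:size' range containing system_ram."""
    for entry in spec.split(","):
        rng, size = entry.split(":")
        start, _, end = rng.partition("-")
        lo = parse_size(start)
        hi = parse_size(end) if end else float("inf")
        if lo <= system_ram < hi:
            return parse_size(size)
    return 0

# 8 GiB of RAM lands in the 2G-64G band -> 256 MiB, matching the line above.
assert crashkernel_size("1G-2G:192M,2G-64G:256M,64G-:512M", 8 << 30) == 256 << 20
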
Oct 10 09:04:58 localhost kernel: Zone ranges:
Oct 10 09:04:58 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 10 09:04:58 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 10 09:04:58 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 09:04:58 localhost kernel:   Device   empty
Oct 10 09:04:58 localhost kernel: Movable zone start for each node
Oct 10 09:04:58 localhost kernel: Early memory node ranges
Oct 10 09:04:58 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 10 09:04:58 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 10 09:04:58 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 09:04:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 10 09:04:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 10 09:04:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 10 09:04:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 10 09:04:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 10 09:04:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 10 09:04:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 10 09:04:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 10 09:04:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 10 09:04:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 10 09:04:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 10 09:04:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 10 09:04:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 10 09:04:58 localhost kernel: TSC deadline timer available
Oct 10 09:04:58 localhost kernel: CPU topo: Max. logical packages:   8
Oct 10 09:04:58 localhost kernel: CPU topo: Max. logical dies:       8
Oct 10 09:04:58 localhost kernel: CPU topo: Max. dies per package:   1
Oct 10 09:04:58 localhost kernel: CPU topo: Max. threads per core:   1
Oct 10 09:04:58 localhost kernel: CPU topo: Num. cores per package:     1
Oct 10 09:04:58 localhost kernel: CPU topo: Num. threads per package:   1
Oct 10 09:04:58 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 10 09:04:58 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 10 09:04:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 10 09:04:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 10 09:04:58 localhost kernel: Booting paravirtualized kernel on KVM
Oct 10 09:04:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 10 09:04:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 10 09:04:58 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 10 09:04:58 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 10 09:04:58 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 10 09:04:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 10 09:04:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 09:04:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 10 09:04:58 localhost kernel: random: crng init done
Oct 10 09:04:58 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 10 09:04:58 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
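
In these hash-table lines, "order" is the power-of-two count of 4 KiB pages backing the table, so the byte figures follow directly: order 11 is 2^11 x 4096 = 8388608 bytes, and 8388608 bytes over 1048576 entries is 8 bytes per bucket. A quick check of that arithmetic:

PAGE_SIZE = 4096

def table_bytes(order):
    """An order-N allocation is 2**N contiguous 4 KiB pages."""
    return (1 << order) * PAGE_SIZE

assert table_bytes(11) == 8388608      # Dentry cache line above
assert table_bytes(10) == 4194304      # Inode-cache line above
assert table_bytes(10) // 524288 == 8  # 8-byte bucket pointers
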
Oct 10 09:04:58 localhost kernel: Fallback order for Node 0: 0 
Oct 10 09:04:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 10 09:04:58 localhost kernel: Policy zone: Normal
Oct 10 09:04:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 10 09:04:58 localhost kernel: software IO TLB: area num 8.
Oct 10 09:04:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 10 09:04:58 localhost kernel: ftrace: allocating 49162 entries in 193 pages
Oct 10 09:04:58 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 10 09:04:58 localhost kernel: Dynamic Preempt: voluntary
Oct 10 09:04:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 10 09:04:58 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 10 09:04:58 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 10 09:04:58 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 10 09:04:58 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 10 09:04:58 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 10 09:04:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 10 09:04:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 10 09:04:58 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 09:04:58 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 09:04:58 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 09:04:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 10 09:04:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 10 09:04:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 10 09:04:58 localhost kernel: Console: colour VGA+ 80x25
Oct 10 09:04:58 localhost kernel: printk: console [ttyS0] enabled
Oct 10 09:04:58 localhost kernel: ACPI: Core revision 20230331
Oct 10 09:04:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 10 09:04:58 localhost kernel: x2apic enabled
Oct 10 09:04:58 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 10 09:04:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 10 09:04:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 10 09:04:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 10 09:04:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 10 09:04:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 10 09:04:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 10 09:04:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 10 09:04:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 10 09:04:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 10 09:04:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 10 09:04:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 10 09:04:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 10 09:04:58 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 10 09:04:58 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 10 09:04:58 localhost kernel: x86/bugs: return thunk changed
Oct 10 09:04:58 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 10 09:04:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 10 09:04:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 10 09:04:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 10 09:04:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 10 09:04:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 10 09:04:58 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 10 09:04:58 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 10 09:04:58 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 10 09:04:58 localhost kernel: landlock: Up and running.
Oct 10 09:04:58 localhost kernel: Yama: becoming mindful.
Oct 10 09:04:58 localhost kernel: SELinux:  Initializing.
Oct 10 09:04:58 localhost kernel: LSM support for eBPF active
Oct 10 09:04:58 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 09:04:58 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 09:04:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 10 09:04:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 10 09:04:58 localhost kernel: ... version:                0
Oct 10 09:04:58 localhost kernel: ... bit width:              48
Oct 10 09:04:58 localhost kernel: ... generic registers:      6
Oct 10 09:04:58 localhost kernel: ... value mask:             0000ffffffffffff
Oct 10 09:04:58 localhost kernel: ... max period:             00007fffffffffff
Oct 10 09:04:58 localhost kernel: ... fixed-purpose events:   0
Oct 10 09:04:58 localhost kernel: ... event mask:             000000000000003f
Oct 10 09:04:58 localhost kernel: signal: max sigframe size: 1776
Oct 10 09:04:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 10 09:04:58 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 10 09:04:58 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 10 09:04:58 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 10 09:04:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 10 09:04:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 10 09:04:58 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
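
The 44800.00 total is simply 8 CPUs x 5600.00 BogoMIPS each, and the per-CPU figure comes from the preset lpj: BogoMIPS = lpj * HZ / 500000, so with HZ=1000 (assumed here; the usual RHEL 9 x86_64 build setting), 2800000 * 1000 / 500000 = 5600 for this 2800 MHz vCPU:

HZ = 1000          # assumed CONFIG_HZ for this kernel build
lpj = 2_800_000    # loops-per-jiffy preset from the 2800.000 MHz TSC (see above)

bogomips = lpj * HZ / 500_000
assert bogomips == 5600.0
assert 8 * bogomips == 44800.0  # matches the smpboot total above
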
Oct 10 09:04:58 localhost kernel: node 0 deferred pages initialised in 8ms
Oct 10 09:04:58 localhost kernel: Memory: 7765552K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616212K reserved, 0K cma-reserved)
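
The Memory: line accounts for where the 8 GiB went: available memory is roughly the total minus the kernel image and reserved regions. The figures above can be cross-checked, though the listed components overlap slightly, so treat this as a sanity check rather than an exact identity:

total_k = 8388068
available_k = 7765552

consumed_k = total_k - available_k
print(consumed_k)  # 622516K held by kernel text/data, initmem, bss, and reservations
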
Oct 10 09:04:58 localhost kernel: devtmpfs: initialized
Oct 10 09:04:58 localhost kernel: x86/mm: Memory block size: 128MB
Oct 10 09:04:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 10 09:04:58 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 10 09:04:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 10 09:04:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 10 09:04:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 10 09:04:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 10 09:04:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 10 09:04:58 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 10 09:04:58 localhost kernel: audit: type=2000 audit(1760087096.389:1): state=initialized audit_enabled=0 res=1
Oct 10 09:04:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 10 09:04:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 10 09:04:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 10 09:04:58 localhost kernel: cpuidle: using governor menu
Oct 10 09:04:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 10 09:04:58 localhost kernel: PCI: Using configuration type 1 for base access
Oct 10 09:04:58 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 10 09:04:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 10 09:04:58 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 10 09:04:58 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 10 09:04:58 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 10 09:04:58 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 10 09:04:58 localhost kernel: Demotion targets for Node 0: null
Oct 10 09:04:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 10 09:04:58 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 10 09:04:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 10 09:04:58 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 10 09:04:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 10 09:04:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 10 09:04:58 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 10 09:04:58 localhost kernel: ACPI: Interpreter enabled
Oct 10 09:04:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 10 09:04:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 10 09:04:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 10 09:04:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 10 09:04:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 10 09:04:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 10 09:04:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [3] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [4] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [5] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [6] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [7] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [8] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [9] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [10] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [11] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [12] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [13] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [14] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [15] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [16] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [17] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [18] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [19] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [20] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [21] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [22] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [23] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [24] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [25] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [26] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [27] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [28] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [29] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [30] registered
Oct 10 09:04:58 localhost kernel: acpiphp: Slot [31] registered
Oct 10 09:04:58 localhost kernel: PCI host bridge to bus 0000:00
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 10 09:04:58 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 10 09:04:58 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 10 09:04:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 09:04:58 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 10 09:04:58 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 10 09:04:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 10 09:04:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 10 09:04:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 10 09:04:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 10 09:04:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 10 09:04:58 localhost kernel: iommu: Default domain type: Translated
Oct 10 09:04:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 10 09:04:58 localhost kernel: SCSI subsystem initialized
Oct 10 09:04:58 localhost kernel: ACPI: bus type USB registered
Oct 10 09:04:58 localhost kernel: usbcore: registered new interface driver usbfs
Oct 10 09:04:58 localhost kernel: usbcore: registered new interface driver hub
Oct 10 09:04:58 localhost kernel: usbcore: registered new device driver usb
Oct 10 09:04:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 10 09:04:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 10 09:04:58 localhost kernel: PTP clock support registered
Oct 10 09:04:58 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 10 09:04:58 localhost kernel: NetLabel: Initializing
Oct 10 09:04:58 localhost kernel: NetLabel:  domain hash size = 128
Oct 10 09:04:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 10 09:04:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 10 09:04:58 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 10 09:04:58 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 10 09:04:58 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 10 09:04:58 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 10 09:04:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 10 09:04:58 localhost kernel: vgaarb: loaded
Oct 10 09:04:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 10 09:04:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 10 09:04:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 10 09:04:58 localhost kernel: pnp: PnP ACPI init
Oct 10 09:04:58 localhost kernel: pnp 00:03: [dma 2]
Oct 10 09:04:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 10 09:04:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 10 09:04:58 localhost kernel: NET: Registered PF_INET protocol family
Oct 10 09:04:58 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 10 09:04:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 10 09:04:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 10 09:04:58 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 10 09:04:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 10 09:04:58 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 10 09:04:58 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 10 09:04:58 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 09:04:58 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 09:04:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 10 09:04:58 localhost kernel: NET: Registered PF_XDP protocol family
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 10 09:04:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 10 09:04:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 10 09:04:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 10 09:04:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 106489 usecs
Oct 10 09:04:58 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 10 09:04:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 10 09:04:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 10 09:04:58 localhost kernel: ACPI: bus type thunderbolt registered
Oct 10 09:04:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 10 09:04:58 localhost kernel: Initialise system trusted keyrings
Oct 10 09:04:58 localhost kernel: Key type blacklist registered
Oct 10 09:04:58 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 10 09:04:58 localhost kernel: zbud: loaded
Oct 10 09:04:58 localhost kernel: integrity: Platform Keyring initialized
Oct 10 09:04:58 localhost kernel: integrity: Machine keyring initialized
Oct 10 09:04:58 localhost kernel: Freeing initrd memory: 85808K
Oct 10 09:04:58 localhost kernel: NET: Registered PF_ALG protocol family
Oct 10 09:04:58 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 10 09:04:58 localhost kernel: Key type asymmetric registered
Oct 10 09:04:58 localhost kernel: Asymmetric key parser 'x509' registered
Oct 10 09:04:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 10 09:04:58 localhost kernel: io scheduler mq-deadline registered
Oct 10 09:04:58 localhost kernel: io scheduler kyber registered
Oct 10 09:04:58 localhost kernel: io scheduler bfq registered
Oct 10 09:04:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 10 09:04:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 10 09:04:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 10 09:04:58 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 10 09:04:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 10 09:04:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 10 09:04:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 10 09:04:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 10 09:04:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 10 09:04:58 localhost kernel: Non-volatile memory driver v1.3
Oct 10 09:04:58 localhost kernel: rdac: device handler registered
Oct 10 09:04:58 localhost kernel: hp_sw: device handler registered
Oct 10 09:04:58 localhost kernel: emc: device handler registered
Oct 10 09:04:58 localhost kernel: alua: device handler registered
Oct 10 09:04:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 10 09:04:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 10 09:04:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 10 09:04:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 10 09:04:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 10 09:04:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 10 09:04:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 10 09:04:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 10 09:04:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 10 09:04:58 localhost kernel: hub 1-0:1.0: USB hub found
Oct 10 09:04:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 10 09:04:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 10 09:04:58 localhost kernel: usbserial: USB Serial support registered for generic
Oct 10 09:04:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 10 09:04:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 10 09:04:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 10 09:04:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 10 09:04:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 10 09:04:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 10 09:04:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 10 09:04:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 10 09:04:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-10T09:04:57 UTC (1760087097)
Oct 10 09:04:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 10 09:04:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
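
rtc_cmos read the hardware clock and set system time to epoch 1760087097, which the kernel renders as 2025-10-10T09:04:57 UTC. The same conversion in Python (timezone-aware, since the naive utcfromtimestamp is deprecated):

from datetime import datetime, timezone

ts = 1760087097  # epoch seconds logged by rtc_cmos above
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2025-10-10T09:04:57+00:00
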
Oct 10 09:04:58 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 10 09:04:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 10 09:04:58 localhost kernel: usbcore: registered new interface driver usbhid
Oct 10 09:04:58 localhost kernel: usbhid: USB HID core driver
Oct 10 09:04:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 10 09:04:58 localhost kernel: Initializing XFRM netlink socket
Oct 10 09:04:58 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 10 09:04:58 localhost kernel: Segment Routing with IPv6
Oct 10 09:04:58 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 10 09:04:58 localhost kernel: mpls_gso: MPLS GSO support
Oct 10 09:04:58 localhost kernel: IPI shorthand broadcast: enabled
Oct 10 09:04:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 10 09:04:58 localhost kernel: AES CTR mode by8 optimization enabled
Oct 10 09:04:58 localhost kernel: sched_clock: Marking stable (1189035311, 148562257)->(1462308318, -124710750)
Oct 10 09:04:58 localhost kernel: registered taskstats version 1
Oct 10 09:04:58 localhost kernel: Loading compiled-in X.509 certificates
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 10 09:04:58 localhost kernel: Demotion targets for Node 0: null
Oct 10 09:04:58 localhost kernel: page_owner is disabled
Oct 10 09:04:58 localhost kernel: Key type .fscrypt registered
Oct 10 09:04:58 localhost kernel: Key type fscrypt-provisioning registered
Oct 10 09:04:58 localhost kernel: Key type big_key registered
Oct 10 09:04:58 localhost kernel: Key type encrypted registered
Oct 10 09:04:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 10 09:04:58 localhost kernel: Loading compiled-in module X.509 certificates
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 09:04:58 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 10 09:04:58 localhost kernel: ima: No architecture policies found
Oct 10 09:04:58 localhost kernel: evm: Initialising EVM extended attributes:
Oct 10 09:04:58 localhost kernel: evm: security.selinux
Oct 10 09:04:58 localhost kernel: evm: security.SMACK64 (disabled)
Oct 10 09:04:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 10 09:04:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 10 09:04:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 10 09:04:58 localhost kernel: evm: security.apparmor (disabled)
Oct 10 09:04:58 localhost kernel: evm: security.ima
Oct 10 09:04:58 localhost kernel: evm: security.capability
Oct 10 09:04:58 localhost kernel: evm: HMAC attrs: 0x1
Oct 10 09:04:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 10 09:04:58 localhost kernel: Running certificate verification RSA selftest
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 10 09:04:58 localhost kernel: Running certificate verification ECDSA selftest
Oct 10 09:04:58 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 10 09:04:58 localhost kernel: clk: Disabling unused clocks
Oct 10 09:04:58 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 10 09:04:58 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 10 09:04:58 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 10 09:04:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 10 09:04:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 10 09:04:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 10 09:04:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 10 09:04:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 10 09:04:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 10 09:04:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 10 09:04:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 10 09:04:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 10 09:04:58 localhost kernel: Run /init as init process
Oct 10 09:04:58 localhost kernel:   with arguments:
Oct 10 09:04:58 localhost kernel:     /init
Oct 10 09:04:58 localhost kernel:   with environment:
Oct 10 09:04:58 localhost kernel:     HOME=/
Oct 10 09:04:58 localhost kernel:     TERM=linux
Oct 10 09:04:58 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64
Oct 10 09:04:58 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 09:04:58 localhost systemd[1]: Detected virtualization kvm.
Oct 10 09:04:58 localhost systemd[1]: Detected architecture x86-64.
Oct 10 09:04:58 localhost systemd[1]: Running in initrd.
Oct 10 09:04:58 localhost systemd[1]: No hostname configured, using default hostname.
Oct 10 09:04:58 localhost systemd[1]: Hostname set to <localhost>.
Oct 10 09:04:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 10 09:04:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 10 09:04:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 09:04:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 10 09:04:58 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 10 09:04:58 localhost systemd[1]: Reached target Local File Systems.
Oct 10 09:04:58 localhost systemd[1]: Reached target Path Units.
Oct 10 09:04:58 localhost systemd[1]: Reached target Slice Units.
Oct 10 09:04:58 localhost systemd[1]: Reached target Swaps.
Oct 10 09:04:58 localhost systemd[1]: Reached target Timer Units.
Oct 10 09:04:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 10 09:04:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 10 09:04:58 localhost systemd[1]: Listening on Journal Socket.
Oct 10 09:04:58 localhost systemd[1]: Listening on udev Control Socket.
Oct 10 09:04:58 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 10 09:04:58 localhost systemd[1]: Reached target Socket Units.
Oct 10 09:04:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 10 09:04:58 localhost systemd[1]: Starting Journal Service...
Oct 10 09:04:58 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 09:04:58 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 10 09:04:58 localhost systemd[1]: Starting Create System Users...
Oct 10 09:04:58 localhost systemd[1]: Starting Setup Virtual Console...
Oct 10 09:04:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 10 09:04:58 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 10 09:04:58 localhost systemd[1]: Finished Create System Users.
Oct 10 09:04:58 localhost systemd-journald[310]: Journal started
Oct 10 09:04:58 localhost systemd-journald[310]: Runtime Journal (/run/log/journal/4cd1b6a8cc364bf3aa73609f8f6b6f5b) is 8.0M, max 153.6M, 145.6M free.
Oct 10 09:04:58 localhost systemd-sysusers[315]: Creating group 'users' with GID 100.
Oct 10 09:04:58 localhost systemd-sysusers[315]: Creating group 'dbus' with GID 81.
Oct 10 09:04:58 localhost systemd-sysusers[315]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 10 09:04:58 localhost systemd[1]: Started Journal Service.
Oct 10 09:04:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 09:04:58 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 09:04:58 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 09:04:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 09:04:58 localhost systemd[1]: Finished Setup Virtual Console.
Oct 10 09:04:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 10 09:04:58 localhost systemd[1]: Starting dracut cmdline hook...
Oct 10 09:04:58 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Oct 10 09:04:58 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 09:04:58 localhost systemd[1]: Finished dracut cmdline hook.
Oct 10 09:04:58 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 10 09:04:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 10 09:04:58 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 10 09:04:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 10 09:04:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 10 09:04:58 localhost kernel: RPC: Registered udp transport module.
Oct 10 09:04:58 localhost kernel: RPC: Registered tcp transport module.
Oct 10 09:04:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 10 09:04:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 10 09:04:58 localhost rpc.statd[446]: Version 2.5.4 starting
Oct 10 09:04:58 localhost rpc.statd[446]: Initializing NSM state
Oct 10 09:04:58 localhost rpc.idmapd[451]: Setting log level to 0
Oct 10 09:04:58 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 10 09:04:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 09:04:58 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 09:04:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 09:04:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 10 09:04:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 10 09:04:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 10 09:04:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 10 09:04:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 10 09:04:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 10 09:04:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 09:04:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 10 09:04:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 09:04:58 localhost systemd[1]: Reached target Network.
Oct 10 09:04:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 09:04:58 localhost systemd[1]: Starting dracut initqueue hook...
Oct 10 09:04:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 10 09:04:58 localhost kernel: libata version 3.00 loaded.
Oct 10 09:04:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 10 09:04:58 localhost systemd-udevd[489]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:04:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 10 09:04:58 localhost kernel:  vda: vda1
Oct 10 09:04:58 localhost kernel: scsi host0: ata_piix
Oct 10 09:04:58 localhost kernel: scsi host1: ata_piix
Oct 10 09:04:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 10 09:04:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 10 09:04:58 localhost systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 09:04:58 localhost systemd[1]: Reached target Initrd Root Device.
Oct 10 09:04:59 localhost kernel: ata1: found unknown device (class 0)
Oct 10 09:04:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 10 09:04:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 10 09:04:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 10 09:04:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 10 09:04:59 localhost systemd[1]: Reached target System Initialization.
Oct 10 09:04:59 localhost systemd[1]: Reached target Basic System.
Oct 10 09:04:59 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 10 09:04:59 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 10 09:04:59 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 10 09:04:59 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 10 09:04:59 localhost systemd[1]: Finished dracut initqueue hook.
Oct 10 09:04:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 09:04:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 10 09:04:59 localhost systemd[1]: Reached target Remote File Systems.
Oct 10 09:04:59 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 10 09:04:59 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 10 09:04:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 10 09:04:59 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Oct 10 09:04:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 09:04:59 localhost systemd[1]: Mounting /sysroot...
Oct 10 09:04:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 10 09:04:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 10 09:04:59 localhost kernel: XFS (vda1): Ending clean mount
Oct 10 09:04:59 localhost systemd[1]: Mounted /sysroot.
Oct 10 09:04:59 localhost systemd[1]: Reached target Initrd Root File System.
Oct 10 09:04:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 10 09:04:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 10 09:04:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 10 09:04:59 localhost systemd[1]: Reached target Initrd File Systems.
Oct 10 09:04:59 localhost systemd[1]: Reached target Initrd Default Target.
Oct 10 09:04:59 localhost systemd[1]: Starting dracut mount hook...
Oct 10 09:04:59 localhost systemd[1]: Finished dracut mount hook.
Oct 10 09:04:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 10 09:04:59 localhost rpc.idmapd[451]: exiting on signal 15
Oct 10 09:04:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 10 09:04:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 10 09:05:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 10 09:05:00 localhost systemd[1]: Stopped target Network.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Timer Units.
Oct 10 09:05:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 10 09:05:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Basic System.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Path Units.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Remote File Systems.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Slice Units.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Socket Units.
Oct 10 09:05:00 localhost systemd[1]: Stopped target System Initialization.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Local File Systems.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Swaps.
Oct 10 09:05:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut mount hook.
Oct 10 09:05:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 10 09:05:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 10 09:05:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 10 09:05:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 10 09:05:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 10 09:05:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 10 09:05:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 10 09:05:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 10 09:05:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 10 09:05:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 10 09:05:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 10 09:05:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 10 09:05:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Closed udev Control Socket.
Oct 10 09:05:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Closed udev Kernel Socket.
Oct 10 09:05:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 10 09:05:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 10 09:05:00 localhost systemd[1]: Starting Cleanup udev Database...
Oct 10 09:05:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 10 09:05:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 10 09:05:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Stopped Create System Users.
Oct 10 09:05:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 10 09:05:00 localhost systemd[1]: Finished Cleanup udev Database.
Oct 10 09:05:00 localhost systemd[1]: Reached target Switch Root.
Oct 10 09:05:00 localhost systemd[1]: Starting Switch Root...
Oct 10 09:05:00 localhost systemd[1]: Switching root.
Oct 10 09:05:00 localhost systemd-journald[310]: Journal stopped
Oct 10 09:05:01 localhost systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Oct 10 09:05:01 localhost kernel: audit: type=1404 audit(1760087100.348:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability open_perms=1
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:05:01 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:05:01 localhost kernel: audit: type=1403 audit(1760087100.509:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 10 09:05:01 localhost systemd[1]: Successfully loaded SELinux policy in 164.473ms.
Oct 10 09:05:01 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.470ms.
Oct 10 09:05:01 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 09:05:01 localhost systemd[1]: Detected virtualization kvm.
Oct 10 09:05:01 localhost systemd[1]: Detected architecture x86-64.
Oct 10 09:05:01 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:05:01 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Stopped Switch Root.
Oct 10 09:05:01 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 10 09:05:01 localhost systemd[1]: Created slice Slice /system/getty.
Oct 10 09:05:01 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 10 09:05:01 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 10 09:05:01 localhost systemd[1]: Created slice User and Session Slice.
Oct 10 09:05:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 09:05:01 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 10 09:05:01 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 10 09:05:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 10 09:05:01 localhost systemd[1]: Stopped target Switch Root.
Oct 10 09:05:01 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 10 09:05:01 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 10 09:05:01 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 10 09:05:01 localhost systemd[1]: Reached target Path Units.
Oct 10 09:05:01 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 10 09:05:01 localhost systemd[1]: Reached target Slice Units.
Oct 10 09:05:01 localhost systemd[1]: Reached target Swaps.
Oct 10 09:05:01 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 10 09:05:01 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 10 09:05:01 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 10 09:05:01 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 10 09:05:01 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 10 09:05:01 localhost systemd[1]: Listening on udev Control Socket.
Oct 10 09:05:01 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 10 09:05:01 localhost systemd[1]: Mounting Huge Pages File System...
Oct 10 09:05:01 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 10 09:05:01 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 10 09:05:01 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 10 09:05:01 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 09:05:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 10 09:05:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 10 09:05:01 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 10 09:05:01 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 10 09:05:01 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 10 09:05:01 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 10 09:05:01 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 10 09:05:01 localhost systemd[1]: Stopped Journal Service.
Oct 10 09:05:01 localhost kernel: fuse: init (API version 7.37)
Oct 10 09:05:01 localhost systemd[1]: Starting Journal Service...
Oct 10 09:05:01 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 09:05:01 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 10 09:05:01 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 09:05:01 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 10 09:05:01 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 10 09:05:01 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 10 09:05:01 localhost systemd-journald[681]: Journal started
Oct 10 09:05:01 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 09:05:00 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 10 09:05:00 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 10 09:05:01 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 10 09:05:01 localhost kernel: ACPI: bus type drm_connector registered
Oct 10 09:05:01 localhost systemd[1]: Started Journal Service.
Oct 10 09:05:01 localhost systemd[1]: Mounted Huge Pages File System.
Oct 10 09:05:01 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 10 09:05:01 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 10 09:05:01 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 10 09:05:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 10 09:05:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 10 09:05:01 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 10 09:05:01 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 10 09:05:01 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 10 09:05:01 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 10 09:05:01 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 10 09:05:01 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 10 09:05:01 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 10 09:05:01 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 10 09:05:01 localhost systemd[1]: Mounting FUSE Control File System...
Oct 10 09:05:01 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 09:05:01 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 10 09:05:01 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 10 09:05:01 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 10 09:05:01 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 10 09:05:01 localhost systemd[1]: Starting Create System Users...
Oct 10 09:05:01 localhost systemd[1]: Mounted FUSE Control File System.
Oct 10 09:05:01 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 09:05:01 localhost systemd-journald[681]: Received client request to flush runtime journal.
Oct 10 09:05:01 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 10 09:05:01 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 10 09:05:01 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 09:05:01 localhost systemd[1]: Finished Create System Users.
Oct 10 09:05:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 09:05:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 10 09:05:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 09:05:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 10 09:05:01 localhost systemd[1]: Reached target Local File Systems.
Oct 10 09:05:01 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 10 09:05:01 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 10 09:05:01 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 10 09:05:01 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 10 09:05:01 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 10 09:05:01 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 10 09:05:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 09:05:01 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Oct 10 09:05:01 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 10 09:05:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 09:05:01 localhost systemd[1]: Starting Security Auditing Service...
Oct 10 09:05:01 localhost systemd[1]: Starting RPC Bind...
Oct 10 09:05:01 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 10 09:05:01 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 10 09:05:01 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 10 09:05:01 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 10 09:05:01 localhost systemd[1]: Started RPC Bind.
Oct 10 09:05:01 localhost augenrules[709]: /sbin/augenrules: No change
Oct 10 09:05:01 localhost augenrules[724]: No rules
Oct 10 09:05:01 localhost augenrules[724]: enabled 1
Oct 10 09:05:01 localhost augenrules[724]: failure 1
Oct 10 09:05:01 localhost augenrules[724]: pid 704
Oct 10 09:05:01 localhost augenrules[724]: rate_limit 0
Oct 10 09:05:01 localhost augenrules[724]: backlog_limit 8192
Oct 10 09:05:01 localhost augenrules[724]: lost 0
Oct 10 09:05:01 localhost augenrules[724]: backlog 3
Oct 10 09:05:01 localhost augenrules[724]: backlog_wait_time 60000
Oct 10 09:05:01 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 10 09:05:01 localhost augenrules[724]: enabled 1
Oct 10 09:05:01 localhost augenrules[724]: failure 1
Oct 10 09:05:01 localhost augenrules[724]: pid 704
Oct 10 09:05:01 localhost augenrules[724]: rate_limit 0
Oct 10 09:05:01 localhost augenrules[724]: backlog_limit 8192
Oct 10 09:05:01 localhost augenrules[724]: lost 0
Oct 10 09:05:01 localhost augenrules[724]: backlog 2
Oct 10 09:05:01 localhost augenrules[724]: backlog_wait_time 60000
Oct 10 09:05:01 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 10 09:05:01 localhost augenrules[724]: enabled 1
Oct 10 09:05:01 localhost augenrules[724]: failure 1
Oct 10 09:05:01 localhost augenrules[724]: pid 704
Oct 10 09:05:01 localhost augenrules[724]: rate_limit 0
Oct 10 09:05:01 localhost augenrules[724]: backlog_limit 8192
Oct 10 09:05:01 localhost augenrules[724]: lost 0
Oct 10 09:05:01 localhost augenrules[724]: backlog 1
Oct 10 09:05:01 localhost augenrules[724]: backlog_wait_time 60000
Oct 10 09:05:01 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 10 09:05:01 localhost systemd[1]: Started Security Auditing Service.
Oct 10 09:05:01 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 10 09:05:01 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 10 09:05:01 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 10 09:05:01 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 10 09:05:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 09:05:01 localhost systemd[1]: Starting Update is Completed...
Oct 10 09:05:01 localhost systemd[1]: Finished Update is Completed.
Oct 10 09:05:01 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 09:05:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 09:05:01 localhost systemd[1]: Reached target System Initialization.
Oct 10 09:05:01 localhost systemd[1]: Started dnf makecache --timer.
Oct 10 09:05:02 localhost systemd[1]: Started Daily rotation of log files.
Oct 10 09:05:02 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 10 09:05:02 localhost systemd[1]: Reached target Timer Units.
Oct 10 09:05:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 10 09:05:02 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 10 09:05:02 localhost systemd[1]: Reached target Socket Units.
Oct 10 09:05:02 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 10 09:05:02 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 09:05:02 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 10 09:05:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 10 09:05:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 09:05:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 10 09:05:02 localhost systemd-udevd[744]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:05:02 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 10 09:05:02 localhost systemd[1]: Reached target Basic System.
Oct 10 09:05:02 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 10 09:05:02 localhost dbus-broker-lau[771]: Ready
Oct 10 09:05:02 localhost systemd[1]: Starting NTP client/server...
Oct 10 09:05:02 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 10 09:05:02 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 10 09:05:02 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 10 09:05:02 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 10 09:05:02 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 10 09:05:02 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 10 09:05:02 localhost chronyd[803]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 09:05:02 localhost chronyd[803]: Loaded 0 symmetric keys
Oct 10 09:05:02 localhost systemd[1]: Started irqbalance daemon.
Oct 10 09:05:02 localhost chronyd[803]: Using right/UTC timezone to obtain leap second data
Oct 10 09:05:02 localhost chronyd[803]: Loaded seccomp filter (level 2)
Oct 10 09:05:02 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 10 09:05:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:05:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:05:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:05:02 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 10 09:05:02 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 10 09:05:02 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 10 09:05:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 10 09:05:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 10 09:05:02 localhost systemd[1]: Starting User Login Management...
Oct 10 09:05:02 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 10 09:05:02 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 10 09:05:02 localhost systemd[1]: Started NTP client/server.
Oct 10 09:05:02 localhost kernel: Console: switching to colour dummy device 80x25
Oct 10 09:05:02 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 10 09:05:02 localhost kernel: [drm] features: -context_init
Oct 10 09:05:02 localhost kernel: [drm] number of scanouts: 1
Oct 10 09:05:02 localhost kernel: [drm] number of cap sets: 0
Oct 10 09:05:02 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 10 09:05:02 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 10 09:05:02 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 10 09:05:02 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 10 09:05:02 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 10 09:05:02 localhost kernel: kvm_amd: TSC scaling supported
Oct 10 09:05:02 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 10 09:05:02 localhost kernel: kvm_amd: Nested Paging enabled
Oct 10 09:05:02 localhost kernel: kvm_amd: LBR virtualization supported
Oct 10 09:05:02 localhost systemd-logind[806]: New seat seat0.
Oct 10 09:05:02 localhost systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 09:05:02 localhost systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 09:05:02 localhost systemd[1]: Started User Login Management.
Oct 10 09:05:02 localhost iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Oct 10 09:05:02 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 10 09:05:02 localhost cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 10 Oct 2025 09:05:02 +0000. Up 6.54 seconds.
Oct 10 09:05:03 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 10 09:05:03 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 10 09:05:03 localhost systemd[1]: run-cloud\x2dinit-tmp-tmphm_yn15m.mount: Deactivated successfully.
Oct 10 09:05:03 localhost systemd[1]: Starting Hostname Service...
Oct 10 09:05:03 localhost systemd[1]: Started Hostname Service.
Oct 10 09:05:03 np0005479821.novalocal systemd-hostnamed[857]: Hostname set to <np0005479821.novalocal> (static)
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Reached target Preparation for Network.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Starting Network Manager...
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5248] NetworkManager (version 1.54.1-1.el9) is starting... (boot:175b724b-d2ce-4794-9920-58528258c234)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5255] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5425] manager[0x556165b5a080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5480] hostname: hostname: using hostnamed
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5480] hostname: static hostname changed from (none) to "np0005479821.novalocal"
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5484] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5600] manager[0x556165b5a080]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5600] manager[0x556165b5a080]: rfkill: WWAN hardware radio set enabled
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5686] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5687] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5687] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5688] manager: Networking is enabled by state file
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5690] settings: Loaded settings plugin: keyfile (internal)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5727] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5748] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5773] dhcp: init: Using DHCP client 'internal'
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5775] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5788] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5808] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5814] device (lo): Activation: starting connection 'lo' (a2891a4f-849f-4558-a87b-30149848b6b6)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5822] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5825] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5851] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5854] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5856] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5858] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5859] device (eth0): carrier: link connected
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5861] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5867] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5872] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5878] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5885] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5889] manager: NetworkManager state is now CONNECTING
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5891] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5902] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5907] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Started Network Manager.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Reached target Network.
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5977] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.5984] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6007] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6179] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6182] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6192] device (lo): Activation: successful, device activated.
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6198] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6200] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6205] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6212] device (eth0): Activation: successful, device activated.
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6227] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 09:05:03 np0005479821.novalocal NetworkManager[861]: <info>  [1760087103.6231] manager: startup complete
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Reached target NFS client services.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Reached target Remote File Systems.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 10 09:05:03 np0005479821.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 10 09:05:03 np0005479821.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 10 Oct 2025 09:05:03 +0000. Up 7.58 seconds.
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.94         | 255.255.255.0 | global | fa:16:3e:de:3f:d7 |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fede:3fd7/64 |       .       |  link  | fa:16:3e:de:3f:d7 |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 10 09:05:04 np0005479821.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 09:05:04 np0005479821.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Oct 10 09:05:04 np0005479821.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 10 09:05:04 np0005479821.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Oct 10 09:05:04 np0005479821.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Oct 10 09:05:04 np0005479821.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Oct 10 09:05:04 np0005479821.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Generating public/private rsa key pair.
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: The key fingerprint is:
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: SHA256:rQSdZhNbyFMK3SY1AQZvgr3DsfS6H8AnJHNAOyoAR0w root@np0005479821.novalocal
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: The key's randomart image is:
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: +---[RSA 3072]----+
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |.+E.o o++**.     |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |...  = =**o.     |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |.   * O @+       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |.  . X @ o       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |. .   O S .      |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: | .     B .       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |      . o        |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |       . .       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |      ...        |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: The key fingerprint is:
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: SHA256:K1qzo4paCzfpCGlLrYkF9yH4b97alY5BAFBMf/UGDN0 root@np0005479821.novalocal
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: The key's randomart image is:
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: +---[ECDSA 256]---+
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |.=+   .+o.       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |  .o   .ooE      |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |    o .   o      |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: | .   o   .       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |o o . . S        |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: | =.+ o   o       |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |+oB.. = +        |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |=O++.=.O         |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |*+++*o=..        |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: The key fingerprint is:
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: SHA256:PhO1gb5YfFtdeQ0p9u7rIfzlmt1kxHWIDPBstEdsePE root@np0005479821.novalocal
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: The key's randomart image is:
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: +--[ED25519 256]--+
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |        ..ooo... |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |         =.=*o.oo|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |        . B++oE.*|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |       o o + ..o+|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |        S o ... o|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |       + + +  .. |
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |      . = . o.. +|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |         o   o.O.|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: |             .*o+|
Oct 10 09:05:05 np0005479821.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Reached target Network is Online.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Starting System Logging Service...
Oct 10 09:05:05 np0005479821.novalocal sm-notify[1005]: Version 2.5.4 starting
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Starting Permit User Sessions...
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 10 09:05:05 np0005479821.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Oct 10 09:05:05 np0005479821.novalocal sshd[1007]: Server listening on :: port 22.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Finished Permit User Sessions.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Started Command Scheduler.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Started Getty on tty1.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 10 09:05:05 np0005479821.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Oct 10 09:05:05 np0005479821.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Reached target Login Prompts.
Oct 10 09:05:05 np0005479821.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 10% if used.)
Oct 10 09:05:05 np0005479821.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Oct 10 09:05:05 np0005479821.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Oct 10 09:05:05 np0005479821.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Started System Logging Service.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Reached target Multi-User System.
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1009]: Connection reset by 38.102.83.114 port 59958 [preauth]
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1018]: Unable to negotiate with 38.102.83.114 port 59974: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1020]: Connection reset by 38.102.83.114 port 59978 [preauth]
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1022]: Unable to negotiate with 38.102.83.114 port 59988: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1024]: Unable to negotiate with 38.102.83.114 port 59998: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 10 09:05:05 np0005479821.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1026]: Connection reset by 38.102.83.114 port 60010 [preauth]
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1031]: Unable to negotiate with 38.102.83.114 port 60022: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1033]: Unable to negotiate with 38.102.83.114 port 60028: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 10 09:05:05 np0005479821.novalocal sshd-session[1029]: Connection closed by 38.102.83.114 port 60014 [preauth]
Oct 10 09:05:05 np0005479821.novalocal cloud-init[1037]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 10 Oct 2025 09:05:05 +0000. Up 9.49 seconds.
Oct 10 09:05:05 np0005479821.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 10 09:05:06 np0005479821.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1041]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 10 Oct 2025 09:05:06 +0000. Up 9.90 seconds.
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1043]: #############################################################
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1044]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1046]: 256 SHA256:K1qzo4paCzfpCGlLrYkF9yH4b97alY5BAFBMf/UGDN0 root@np0005479821.novalocal (ECDSA)
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1048]: 256 SHA256:PhO1gb5YfFtdeQ0p9u7rIfzlmt1kxHWIDPBstEdsePE root@np0005479821.novalocal (ED25519)
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1050]: 3072 SHA256:rQSdZhNbyFMK3SY1AQZvgr3DsfS6H8AnJHNAOyoAR0w root@np0005479821.novalocal (RSA)
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1051]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1052]: #############################################################
Oct 10 09:05:06 np0005479821.novalocal cloud-init[1041]: Cloud-init v. 24.4-7.el9 finished at Fri, 10 Oct 2025 09:05:06 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.10 seconds
Oct 10 09:05:06 np0005479821.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 10 09:05:06 np0005479821.novalocal systemd[1]: Reached target Cloud-init target.
Oct 10 09:05:06 np0005479821.novalocal systemd[1]: Startup finished in 1.531s (kernel) + 2.431s (initrd) + 6.201s (userspace) = 10.164s.
Oct 10 09:05:08 np0005479821.novalocal chronyd[803]: Selected source 198.50.127.72 (2.centos.pool.ntp.org)
Oct 10 09:05:08 np0005479821.novalocal chronyd[803]: System clock TAI offset set to 37 seconds
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: IRQ 25 affinity is now unmanaged
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: IRQ 31 affinity is now unmanaged
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: IRQ 28 affinity is now unmanaged
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: IRQ 32 affinity is now unmanaged
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: IRQ 30 affinity is now unmanaged
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 10 09:05:12 np0005479821.novalocal irqbalance[795]: IRQ 29 affinity is now unmanaged
Oct 10 09:05:13 np0005479821.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:05:19 np0005479821.novalocal sshd-session[1057]: Accepted publickey for zuul from 38.102.83.114 port 40758 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 10 09:05:19 np0005479821.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 10 09:05:19 np0005479821.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 10 09:05:19 np0005479821.novalocal systemd-logind[806]: New session 1 of user zuul.
Oct 10 09:05:19 np0005479821.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 10 09:05:19 np0005479821.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Queued start job for default target Main User Target.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Created slice User Application Slice.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Reached target Paths.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Reached target Timers.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Starting D-Bus User Message Bus Socket...
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Starting Create User's Volatile Files and Directories...
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Reached target Sockets.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Finished Create User's Volatile Files and Directories.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Reached target Basic System.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Reached target Main User Target.
Oct 10 09:05:19 np0005479821.novalocal systemd[1061]: Startup finished in 140ms.
Oct 10 09:05:19 np0005479821.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 10 09:05:19 np0005479821.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 10 09:05:19 np0005479821.novalocal sshd-session[1057]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:05:20 np0005479821.novalocal python3[1144]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:22 np0005479821.novalocal python3[1172]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:32 np0005479821.novalocal python3[1230]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:33 np0005479821.novalocal python3[1270]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 10 09:05:33 np0005479821.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:05:35 np0005479821.novalocal python3[1298]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDEBkxJ4sw2+DK3cAbafLjRenK6XkRzPrF3EgUC0Qy/9kZ0kuErGkKyCEXRNE93NnKaUfoU9ebcJtP/W0B6xem+P337Yb5eT1d5d0DPlSyJ224O/rNncfiIo6YcMhrWXlb8yWwfHogZqjmOgJoH57cdsVMt26tUmFXzrJ1qEBloCvfoEe/tx8o3aeflIhUQ0zm2bbmhRn09oGRCODyyr02YoJZm5GbMiTb7mz8xvM31PEo8DzS5ti1YMOUi76ojLKIS6hZkIk4sUuSXmOwBoYhmyGjvs8csl/rxfVJq3bV+DFnatOKlFCyjgY0Ed4oCeReEGI6h29najM/8mUzfOeBj0dyWj3N3oOwlewtF5ifTB4JPwfEN1Rx37wbEzN/2Q7MOKzeWDxP2E0trD5ey9oqWFCpRpuJURMiPr+A6h070uR8U8vUNxGtH3vAmkuN+p3w79WF1wzlCmcoC+oSdwETcoOqkD84qkNgYJpVVpboSnwBo/H/aPJuJhs/nYPhz+c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:35 np0005479821.novalocal python3[1322]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:36 np0005479821.novalocal python3[1421]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:36 np0005479821.novalocal python3[1492]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087136.1292088-251-167503669705895/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=bea29065a9ff49468ede17c902a062ce_id_rsa follow=False checksum=6477c55dd7b29e382b0ff49c34043ebcd2bcc305 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:37 np0005479821.novalocal python3[1615]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:37 np0005479821.novalocal python3[1686]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087137.1738122-306-141917132730035/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=bea29065a9ff49468ede17c902a062ce_id_rsa.pub follow=False checksum=8b86d6c8317b3a249fa7c3a90607af8e51a186ef backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:39 np0005479821.novalocal python3[1734]: ansible-ping Invoked with data=pong
Oct 10 09:05:40 np0005479821.novalocal python3[1758]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:42 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 10 09:05:42 np0005479821.novalocal irqbalance[795]: IRQ 26 affinity is now unmanaged
Oct 10 09:05:42 np0005479821.novalocal python3[1816]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 10 09:05:43 np0005479821.novalocal python3[1848]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479821.novalocal python3[1872]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479821.novalocal python3[1896]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479821.novalocal python3[1920]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479821.novalocal python3[1944]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:45 np0005479821.novalocal python3[1968]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:46 np0005479821.novalocal sudo[1992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irepdvxcygddytyinnkmzkqshpczgibg ; /usr/bin/python3'
Oct 10 09:05:46 np0005479821.novalocal sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:47 np0005479821.novalocal python3[1994]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:47 np0005479821.novalocal sudo[1992]: pam_unix(sudo:session): session closed for user root
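The sudo pair above (a BECOME-SUCCESS marker, then a PAM session open/close) is Ansible's privilege-escalation handshake: the wrapper echoes a random marker so the controller can tell that sudo succeeded before the module payload is fed to /usr/bin/python3 on stdin. A rough reproduction of the pattern, with a hypothetical marker and a trivial payload:

    $ sudo /bin/sh -c 'echo BECOME-SUCCESS-example0000 ; /usr/bin/python3' <<'EOF'
    print("module payload would run here as root")
    EOF
    BECOME-SUCCESS-example0000
    module payload would run here as root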
Oct 10 09:05:47 np0005479821.novalocal sudo[2070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsvlyzhifoxletpmqhgoyamsxwtkhmrc ; /usr/bin/python3'
Oct 10 09:05:47 np0005479821.novalocal sudo[2070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:47 np0005479821.novalocal python3[2072]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:47 np0005479821.novalocal sudo[2070]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:48 np0005479821.novalocal sudo[2143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxmeccdvqjnhcrmxaxigougwhyniejaz ; /usr/bin/python3'
Oct 10 09:05:48 np0005479821.novalocal sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:48 np0005479821.novalocal python3[2145]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087147.2093265-31-141772109111794/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:48 np0005479821.novalocal sudo[2143]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:48 np0005479821.novalocal python3[2193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479821.novalocal python3[2217]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479821.novalocal python3[2241]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479821.novalocal python3[2265]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479821.novalocal python3[2289]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:50 np0005479821.novalocal python3[2313]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:50 np0005479821.novalocal python3[2337]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:50 np0005479821.novalocal python3[2361]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479821.novalocal python3[2385]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479821.novalocal python3[2409]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479821.novalocal python3[2433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479821.novalocal python3[2457]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479821.novalocal python3[2481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479821.novalocal python3[2505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479821.novalocal python3[2529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479821.novalocal python3[2553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479821.novalocal python3[2577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479821.novalocal python3[2601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479821.novalocal python3[2625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479821.novalocal python3[2649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479821.novalocal python3[2673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479821.novalocal python3[2697]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479821.novalocal python3[2721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479821.novalocal python3[2745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479821.novalocal python3[2769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479821.novalocal python3[2793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
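Each ansible-authorized_key task above idempotently ensures one public key is present in /home/zuul/.ssh/authorized_keys (state=present, exclusive=False, manage_dir=True). A rough shell equivalent for a single key, with $PUBKEY standing in for a full key line (illustrative, not one of the keys above):

    $ PUBKEY='ssh-ed25519 AAAA... user@example'
    $ install -d -m 0700 -o zuul -g zuul /home/zuul/.ssh
    $ grep -qxF "$PUBKEY" /home/zuul/.ssh/authorized_keys 2>/dev/null || \
          echo "$PUBKEY" >> /home/zuul/.ssh/authorized_keys
    $ chmod 0600 /home/zuul/.ssh/authorized_keys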
Oct 10 09:05:58 np0005479821.novalocal sudo[2817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdljcwiuipfobedukndpidsfrwijgoec ; /usr/bin/python3'
Oct 10 09:05:58 np0005479821.novalocal sudo[2817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:58 np0005479821.novalocal python3[2819]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 09:05:58 np0005479821.novalocal systemd[1]: Starting Time & Date Service...
Oct 10 09:05:58 np0005479821.novalocal systemd[1]: Started Time & Date Service.
Oct 10 09:05:58 np0005479821.novalocal systemd-timedated[2821]: Changed time zone to 'UTC' (UTC).
Oct 10 09:05:58 np0005479821.novalocal sudo[2817]: pam_unix(sudo:session): session closed for user root
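community.general.timezone with name=UTC goes through systemd-timedated over D-Bus, which is why the Time & Date Service is started on demand here, logs the change, and is deactivated again later once idle. The interactive equivalent would be:

    $ timedatectl set-timezone UTC
    $ timedatectl show --property=Timezone
    Timezone=UTC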
Oct 10 09:05:58 np0005479821.novalocal sudo[2848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asmruzgtabttagivafpoodlevqeasvkh ; /usr/bin/python3'
Oct 10 09:05:58 np0005479821.novalocal sudo[2848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:58 np0005479821.novalocal python3[2850]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:58 np0005479821.novalocal sudo[2848]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:59 np0005479821.novalocal python3[2926]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:59 np0005479821.novalocal python3[2997]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760087159.141685-251-244998891554213/source _original_basename=tmp63xb6hv6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:00 np0005479821.novalocal python3[3097]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:06:00 np0005479821.novalocal python3[3168]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760087160.0169353-301-39818264344973/source _original_basename=tmp6pwu0cw9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
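The checksum da39a3ee5e6b4b0d3255bfef95601890afd80709 reported for both copies is the SHA-1 of zero bytes, so /etc/nodepool/sub_nodes and sub_nodes_private are empty placeholders on this single-node job. Easy to verify:

    $ printf '' | sha1sum
    da39a3ee5e6b4b0d3255bfef95601890afd80709  -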
Oct 10 09:06:01 np0005479821.novalocal sudo[3268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzukizararylxneomppztwqrzfsibkj ; /usr/bin/python3'
Oct 10 09:06:01 np0005479821.novalocal sudo[3268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:01 np0005479821.novalocal python3[3270]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:06:01 np0005479821.novalocal sudo[3268]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:01 np0005479821.novalocal sudo[3341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyudtxtwdqnxwezfmtgptgwjhmweokiq ; /usr/bin/python3'
Oct 10 09:06:01 np0005479821.novalocal sudo[3341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:01 np0005479821.novalocal python3[3343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760087161.3675065-381-72179314444038/source _original_basename=tmpn3081gk8 follow=False checksum=de28d19618025176a7a65eba0e40c742fe7af9f4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:01 np0005479821.novalocal sudo[3341]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:02 np0005479821.novalocal python3[3391]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:06:02 np0005479821.novalocal python3[3417]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:06:03 np0005479821.novalocal sudo[3495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntjfkbyksvyjymrjzkrmewaigggwujv ; /usr/bin/python3'
Oct 10 09:06:03 np0005479821.novalocal sudo[3495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:03 np0005479821.novalocal python3[3497]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:06:03 np0005479821.novalocal sudo[3495]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:03 np0005479821.novalocal sudo[3568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvwmdmyhpbkofxioqdecqrfziculvamt ; /usr/bin/python3'
Oct 10 09:06:03 np0005479821.novalocal sudo[3568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:03 np0005479821.novalocal python3[3570]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087163.1676211-451-255876964588039/source _original_basename=tmp_v0t9x18 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:03 np0005479821.novalocal sudo[3568]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:04 np0005479821.novalocal sudo[3619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnolqgdxsgtxlzcssnxogjktafnjrfsa ; /usr/bin/python3'
Oct 10 09:06:04 np0005479821.novalocal sudo[3619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:04 np0005479821.novalocal python3[3621]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-80e1-2ccb-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:06:04 np0005479821.novalocal sudo[3619]: pam_unix(sudo:session): session closed for user root
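After installing /etc/sudoers.d/zuul-sudo-grep (mode=288, i.e. octal 0440), the play validates the whole sudoers configuration with /usr/sbin/visudo -c. To syntax-check just the new fragment, visudo also accepts an explicit file:

    $ visudo -cf /etc/sudoers.d/zuul-sudo-grep
    /etc/sudoers.d/zuul-sudo-grep: parsed OK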
Oct 10 09:06:05 np0005479821.novalocal python3[3649]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-80e1-2ccb-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 10 09:06:06 np0005479821.novalocal python3[3677]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:12 np0005479821.novalocal irqbalance[795]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 10 09:06:12 np0005479821.novalocal irqbalance[795]: IRQ 27 affinity is now unmanaged
Oct 10 09:06:15 np0005479821.novalocal chronyd[803]: Selected source 45.61.49.156 (2.centos.pool.ntp.org)
Oct 10 09:06:23 np0005479821.novalocal sudo[3701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tolzrrmqfrolwbqvkelqeuvxpeaaeehm ; /usr/bin/python3'
Oct 10 09:06:23 np0005479821.novalocal sudo[3701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:23 np0005479821.novalocal python3[3703]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:23 np0005479821.novalocal sudo[3701]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:28 np0005479821.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 10 09:07:02 np0005479821.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 10 09:07:02 np0005479821.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6123] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 09:07:02 np0005479821.novalocal systemd-udevd[3706]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6358] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6400] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6405] device (eth1): carrier: link connected
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6408] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6417] policy: auto-activating connection 'Wired connection 1' (c8beefe8-9fab-3e79-9bba-dd9a73ce9e5c)
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6424] device (eth1): Activation: starting connection 'Wired connection 1' (c8beefe8-9fab-3e79-9bba-dd9a73ce9e5c)
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6425] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6428] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6434] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:07:02 np0005479821.novalocal NetworkManager[861]: <info>  [1760087222.6441] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:03 np0005479821.novalocal python3[3733]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-dbf0-3472-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
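ip -j link emits the link table as JSON, which is what makes it convenient for the playbook to pick out the newly hot-plugged interface. From a shell the same data can be inspected with jq (assuming jq is installed); on this node the list would include lo, eth0 and eth1:

    $ ip -j link | jq -r '.[].ifname'
    lo
    eth0
    eth1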
Oct 10 09:07:13 np0005479821.novalocal sudo[3811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvhtktuhekdbjvpwiibyhpypfvciyca ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:07:13 np0005479821.novalocal sudo[3811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:07:13 np0005479821.novalocal python3[3813]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:07:13 np0005479821.novalocal sudo[3811]: pam_unix(sudo:session): session closed for user root
Oct 10 09:07:13 np0005479821.novalocal sudo[3884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkwmrcisusrwmlugetakhcymtkuvqxhe ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:07:13 np0005479821.novalocal sudo[3884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:07:13 np0005479821.novalocal python3[3886]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087233.056038-104-31412342865439/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=80001c15e26961c70baa0b6b8a48de04a91c15a7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:07:13 np0005479821.novalocal sudo[3884]: pam_unix(sudo:session): session closed for user root
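The deployed file is a NetworkManager keyfile profile for the CI private network; its contents are not logged (content=NOT_LOGGING_PARAMETER), but given the later auto-activation of 'ci-private-network' on eth1, a profile of roughly this shape would fit (all values illustrative):

    $ sudo cat /etc/NetworkManager/system-connections/ci-private-network.nmconnection
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=auto

    [ipv6]
    method=ignore

NetworkManager only honours keyfiles owned by root and not readable by others, which matches the mode=0600 owner=root group=root arguments above.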
Oct 10 09:07:14 np0005479821.novalocal sudo[3934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjanhyntaiqqjuangkigkgofxoepckms ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:07:14 np0005479821.novalocal sudo[3934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:07:14 np0005479821.novalocal python3[3936]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6096] caught SIGTERM, shutting down normally.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Stopping Network Manager...
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6105] dhcp4 (eth0): canceled DHCP transaction
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6105] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6105] dhcp4 (eth0): state changed no lease
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6108] manager: NetworkManager state is now CONNECTING
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6303] dhcp4 (eth1): canceled DHCP transaction
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6304] dhcp4 (eth1): state changed no lease
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[861]: <info>  [1760087234.6355] exiting (success)
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Stopped Network Manager.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Starting Network Manager...
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7055] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:175b724b-d2ce-4794-9920-58528258c234)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7059] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7107] manager[0x55ae4063b070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Starting Hostname Service...
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Started Hostname Service.
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7853] hostname: hostname: using hostnamed
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7854] hostname: static hostname changed from (none) to "np0005479821.novalocal"
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7858] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7862] manager[0x55ae4063b070]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7862] manager[0x55ae4063b070]: rfkill: WWAN hardware radio set enabled
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7887] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7887] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7887] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7888] manager: Networking is enabled by state file
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7889] settings: Loaded settings plugin: keyfile (internal)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7892] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7910] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7919] dhcp: init: Using DHCP client 'internal'
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7921] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7925] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7928] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7934] device (lo): Activation: starting connection 'lo' (a2891a4f-849f-4558-a87b-30149848b6b6)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7938] device (eth0): carrier: link connected
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7941] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7945] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7945] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7949] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7954] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7957] device (eth1): carrier: link connected
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7960] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7963] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c8beefe8-9fab-3e79-9bba-dd9a73ce9e5c) (indicated)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7964] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7967] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7971] device (eth1): Activation: starting connection 'Wired connection 1' (c8beefe8-9fab-3e79-9bba-dd9a73ce9e5c)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7975] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Started Network Manager.
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7978] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7980] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7982] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7983] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7985] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7987] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7989] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7991] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7995] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.7997] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8003] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8005] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8032] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8036] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8040] device (lo): Activation: successful, device activated.
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8047] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8052] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8107] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8147] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8149] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8152] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8155] device (eth0): Activation: successful, device activated.
Oct 10 09:07:14 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087234.8160] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 09:07:14 np0005479821.novalocal sudo[3934]: pam_unix(sudo:session): session closed for user root
Oct 10 09:07:15 np0005479821.novalocal python3[4020]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-dbf0-3472-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:07:16 np0005479821.novalocal sshd-session[4023]: banner exchange: Connection from 195.178.110.15 port 37190: invalid format
Oct 10 09:07:16 np0005479821.novalocal sshd-session[4024]: banner exchange: Connection from 195.178.110.15 port 37196: invalid format
Oct 10 09:07:24 np0005479821.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:07:44 np0005479821.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.3754] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:08:00 np0005479821.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:08:00 np0005479821.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4014] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4015] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4022] device (eth1): Activation: successful, device activated.
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4028] manager: startup complete
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4031] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <warn>  [1760087280.4037] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4044] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4264] dhcp4 (eth1): canceled DHCP transaction
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4264] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4264] dhcp4 (eth1): state changed no lease
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4279] policy: auto-activating connection 'ci-private-network' (678ec1ce-3478-5442-8942-601d574272cc)
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4283] device (eth1): Activation: starting connection 'ci-private-network' (678ec1ce-3478-5442-8942-601d574272cc)
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4284] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4286] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4291] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4298] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4334] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4335] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:08:00 np0005479821.novalocal NetworkManager[3949]: <info>  [1760087280.4341] device (eth1): Activation: successful, device activated.
Oct 10 09:08:10 np0005479821.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:08:11 np0005479821.novalocal systemd[1061]: Starting Mark boot as successful...
Oct 10 09:08:11 np0005479821.novalocal systemd[1061]: Finished Mark boot as successful.
Oct 10 09:08:15 np0005479821.novalocal sshd-session[1071]: Received disconnect from 38.102.83.114 port 40758:11: disconnected by user
Oct 10 09:08:15 np0005479821.novalocal sshd-session[1071]: Disconnected from user zuul 38.102.83.114 port 40758
Oct 10 09:08:15 np0005479821.novalocal sshd-session[1057]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:08:15 np0005479821.novalocal systemd-logind[806]: Session 1 logged out. Waiting for processes to exit.
Oct 10 09:09:17 np0005479821.novalocal sshd-session[4052]: Accepted publickey for zuul from 38.102.83.114 port 36112 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:09:17 np0005479821.novalocal systemd-logind[806]: New session 3 of user zuul.
Oct 10 09:09:17 np0005479821.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 10 09:09:17 np0005479821.novalocal sshd-session[4052]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:09:17 np0005479821.novalocal sudo[4131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sepqhjyeeetniyinjlwmdrnmtjtvqoxh ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:09:17 np0005479821.novalocal sudo[4131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:09:17 np0005479821.novalocal python3[4133]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:09:17 np0005479821.novalocal sudo[4131]: pam_unix(sudo:session): session closed for user root
Oct 10 09:09:18 np0005479821.novalocal sudo[4204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhecphennzaqbeuvvftualoppplbyod ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:09:18 np0005479821.novalocal sudo[4204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:09:18 np0005479821.novalocal python3[4206]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087357.686008-373-106849452300768/source _original_basename=tmplri4kfbg follow=False checksum=0edcb8668707f95c4678608a04fc39cdafb654ec backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:09:18 np0005479821.novalocal sudo[4204]: pam_unix(sudo:session): session closed for user root
Oct 10 09:09:23 np0005479821.novalocal sshd-session[4055]: Connection closed by 38.102.83.114 port 36112
Oct 10 09:09:23 np0005479821.novalocal sshd-session[4052]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:09:23 np0005479821.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 10 09:09:23 np0005479821.novalocal systemd-logind[806]: Session 3 logged out. Waiting for processes to exit.
Oct 10 09:09:23 np0005479821.novalocal systemd-logind[806]: Removed session 3.
Oct 10 09:11:11 np0005479821.novalocal systemd[1061]: Created slice User Background Tasks Slice.
Oct 10 09:11:11 np0005479821.novalocal systemd[1061]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 09:11:11 np0005479821.novalocal systemd[1061]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 09:15:52 np0005479821.novalocal sshd-session[4236]: Accepted publickey for zuul from 38.102.83.114 port 58206 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:15:52 np0005479821.novalocal systemd-logind[806]: New session 4 of user zuul.
Oct 10 09:15:52 np0005479821.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 10 09:15:52 np0005479821.novalocal sshd-session[4236]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:15:52 np0005479821.novalocal sudo[4263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrmnbtjfjldjobirvljrljpnzcgooyrm ; /usr/bin/python3'
Oct 10 09:15:52 np0005479821.novalocal sudo[4263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:52 np0005479821.novalocal python3[4265]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163efc-24cc-305a-504c-000000001cfe-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:52 np0005479821.novalocal sudo[4263]: pam_unix(sudo:session): session closed for user root
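lsblk -nd -o MAJ:MIN prints only the major:minor device number of the named disk (-d suppresses partitions, -n the header); on this virtio root disk it yields the 252:0 that the io.max writes below are keyed on:

    $ lsblk -nd -o MAJ:MIN /dev/vda
    252:0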
Oct 10 09:15:52 np0005479821.novalocal sudo[4291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pabttgxjesfxpxkejmytdavjguaahxva ; /usr/bin/python3'
Oct 10 09:15:52 np0005479821.novalocal sudo[4291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479821.novalocal python3[4293]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479821.novalocal sudo[4291]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479821.novalocal sudo[4318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jctqycomlzlzienfbkdoymocfynasszn ; /usr/bin/python3'
Oct 10 09:15:53 np0005479821.novalocal sudo[4318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479821.novalocal python3[4320]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479821.novalocal sudo[4318]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479821.novalocal sudo[4344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqmqcvlzarekvpufvtqrpovblrkipjsu ; /usr/bin/python3'
Oct 10 09:15:53 np0005479821.novalocal sudo[4344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479821.novalocal python3[4346]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479821.novalocal sudo[4344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479821.novalocal sudo[4370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxdqqaqxvkjzrgpzdazyofmaplbivysk ; /usr/bin/python3'
Oct 10 09:15:53 np0005479821.novalocal sudo[4370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479821.novalocal python3[4372]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479821.novalocal sudo[4370]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:54 np0005479821.novalocal sudo[4396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blaqkwlihcmhvbudwaeyyawrplqaddun ; /usr/bin/python3'
Oct 10 09:15:54 np0005479821.novalocal sudo[4396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:54 np0005479821.novalocal python3[4398]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:54 np0005479821.novalocal python3[4398]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 10 09:15:54 np0005479821.novalocal sudo[4396]: pam_unix(sudo:session): session closed for user root
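The lineinfile task uncomments and flips DefaultIOAccounting in /etc/systemd/system.conf so systemd tracks per-unit block I/O, a prerequisite for the io.max limits applied a few tasks later; the manual equivalent, followed by the same daemon reload the play performs next:

    $ sed -i 's/^#DefaultIOAccounting=no/DefaultIOAccounting=yes/' /etc/systemd/system.conf
    $ systemctl daemon-reload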
Oct 10 09:15:55 np0005479821.novalocal sudo[4422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmiphwnabafkyynagegeteovjtunoneu ; /usr/bin/python3'
Oct 10 09:15:55 np0005479821.novalocal sudo[4422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:55 np0005479821.novalocal python3[4424]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:15:55 np0005479821.novalocal systemd[1]: Reloading.
Oct 10 09:15:55 np0005479821.novalocal systemd-rc-local-generator[4444]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:15:55 np0005479821.novalocal sudo[4422]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:56 np0005479821.novalocal sudo[4478]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixunmvgnrieiswhzamuzxxcwdpynnedl ; /usr/bin/python3'
Oct 10 09:15:56 np0005479821.novalocal sudo[4478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:56 np0005479821.novalocal python3[4480]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 10 09:15:57 np0005479821.novalocal sudo[4478]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:57 np0005479821.novalocal sudo[4504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwscvgumcwimznbgwkpxmsnqqrhgwnhd ; /usr/bin/python3'
Oct 10 09:15:57 np0005479821.novalocal sudo[4504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:57 np0005479821.novalocal python3[4506]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:57 np0005479821.novalocal sudo[4504]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:57 np0005479821.novalocal sudo[4532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzdlawwtavswqkqknbsbwsfflmcumegy ; /usr/bin/python3'
Oct 10 09:15:57 np0005479821.novalocal sudo[4532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:57 np0005479821.novalocal python3[4534]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:57 np0005479821.novalocal sudo[4532]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:57 np0005479821.novalocal sudo[4560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmwipkgpkincexbtyeqexfgtlsbsequo ; /usr/bin/python3'
Oct 10 09:15:57 np0005479821.novalocal sudo[4560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:58 np0005479821.novalocal python3[4562]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:58 np0005479821.novalocal sudo[4560]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:58 np0005479821.novalocal sudo[4588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ighlxsqfoqyfbjryqytlekwgdtbfwbqw ; /usr/bin/python3'
Oct 10 09:15:58 np0005479821.novalocal sudo[4588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:58 np0005479821.novalocal python3[4590]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:58 np0005479821.novalocal sudo[4588]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:59 np0005479821.novalocal python3[4618]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-305a-504c-000000001d04-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:59 np0005479821.novalocal python3[4647]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:16:02 np0005479821.novalocal sshd-session[4239]: Connection closed by 38.102.83.114 port 58206
Oct 10 09:16:02 np0005479821.novalocal sshd-session[4236]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:16:02 np0005479821.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 10 09:16:02 np0005479821.novalocal systemd[1]: session-4.scope: Consumed 3.433s CPU time.
Oct 10 09:16:02 np0005479821.novalocal systemd-logind[806]: Session 4 logged out. Waiting for processes to exit.
Oct 10 09:16:02 np0005479821.novalocal systemd-logind[806]: Removed session 4.
Oct 10 09:16:04 np0005479821.novalocal sshd-session[4654]: Accepted publickey for zuul from 38.102.83.114 port 33628 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:16:04 np0005479821.novalocal systemd-logind[806]: New session 5 of user zuul.
Oct 10 09:16:04 np0005479821.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 10 09:16:04 np0005479821.novalocal sshd-session[4654]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:16:04 np0005479821.novalocal sudo[4681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeokwgnhgfzitzzhmftonrngozydeopl ; /usr/bin/python3'
Oct 10 09:16:04 np0005479821.novalocal sudo[4681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:16:04 np0005479821.novalocal python3[4683]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:37 np0005479821.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:48 np0005479821.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:58 np0005479821.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:17:00 np0005479821.novalocal setsebool[4751]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 10 09:17:00 np0005479821.novalocal setsebool[4751]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:17:13 np0005479821.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:17:34 np0005479821.novalocal dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 09:17:35 np0005479821.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:17:35 np0005479821.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:17:35 np0005479821.novalocal systemd[1]: Reloading.
Oct 10 09:17:35 np0005479821.novalocal systemd-rc-local-generator[5501]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:17:35 np0005479821.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:17:38 np0005479821.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 10 09:17:38 np0005479821.novalocal PackageKit[6875]: daemon start
Oct 10 09:17:38 np0005479821.novalocal systemd[1]: Starting Authorization Manager...
Oct 10 09:17:38 np0005479821.novalocal polkitd[6931]: Started polkitd version 0.117
Oct 10 09:17:39 np0005479821.novalocal polkitd[6931]: Loading rules from directory /etc/polkit-1/rules.d
Oct 10 09:17:39 np0005479821.novalocal polkitd[6931]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 10 09:17:39 np0005479821.novalocal polkitd[6931]: Finished loading, compiling and executing 3 rules
Oct 10 09:17:39 np0005479821.novalocal systemd[1]: Started Authorization Manager.
Oct 10 09:17:39 np0005479821.novalocal polkitd[6931]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 10 09:17:39 np0005479821.novalocal systemd[1]: Started PackageKit Daemon.
Oct 10 09:17:40 np0005479821.novalocal sudo[4681]: pam_unix(sudo:session): session closed for user root
Oct 10 09:17:41 np0005479821.novalocal python3[8149]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-c8da-0a8f-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:17:42 np0005479821.novalocal kernel: evm: overlay not supported
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: Starting D-Bus User Message Bus...
Oct 10 09:17:42 np0005479821.novalocal dbus-broker-launch[9013]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 10 09:17:42 np0005479821.novalocal dbus-broker-launch[9013]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: Started D-Bus User Message Bus.
Oct 10 09:17:42 np0005479821.novalocal dbus-broker-lau[9013]: Ready
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: Created slice Slice /user.
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: podman-8887.scope: unit configures an IP firewall, but not running as root.
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: (This warning is only shown for the first unit using IP firewalling.)
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: Started podman-8887.scope.
Oct 10 09:17:42 np0005479821.novalocal systemd[1061]: Started podman-pause-e418510a.scope.
Oct 10 09:17:43 np0005479821.novalocal sshd-session[4657]: Connection closed by 38.102.83.114 port 33628
Oct 10 09:17:43 np0005479821.novalocal sshd-session[4654]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:17:43 np0005479821.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 10 09:17:43 np0005479821.novalocal systemd[1]: session-5.scope: Consumed 1min 5.693s CPU time.
Oct 10 09:17:43 np0005479821.novalocal systemd-logind[806]: Session 5 logged out. Waiting for processes to exit.
Oct 10 09:17:43 np0005479821.novalocal systemd-logind[806]: Removed session 5.
Oct 10 09:17:56 np0005479821.novalocal sshd-session[15637]: Connection closed by 38.102.83.82 port 60704 [preauth]
Oct 10 09:17:56 np0005479821.novalocal sshd-session[15642]: Unable to negotiate with 38.102.83.82 port 60734: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 10 09:17:56 np0005479821.novalocal sshd-session[15639]: Unable to negotiate with 38.102.83.82 port 60746: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 10 09:17:56 np0005479821.novalocal sshd-session[15644]: Connection closed by 38.102.83.82 port 60694 [preauth]
Oct 10 09:17:56 np0005479821.novalocal sshd-session[15645]: Unable to negotiate with 38.102.83.82 port 60718: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 10 09:18:00 np0005479821.novalocal sshd-session[17261]: Accepted publickey for zuul from 38.102.83.114 port 50042 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:18:01 np0005479821.novalocal systemd-logind[806]: New session 6 of user zuul.
Oct 10 09:18:01 np0005479821.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 10 09:18:01 np0005479821.novalocal sshd-session[17261]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:18:01 np0005479821.novalocal python3[17340]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:18:01 np0005479821.novalocal sudo[17544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzvabzspyyaaytexsmozrpvyulzgthpd ; /usr/bin/python3'
Oct 10 09:18:01 np0005479821.novalocal sudo[17544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:01 np0005479821.novalocal python3[17557]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:18:01 np0005479821.novalocal sudo[17544]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:02 np0005479821.novalocal sudo[17872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erktwddfmrxpacuypgsgaifmbemmxtxt ; /usr/bin/python3'
Oct 10 09:18:02 np0005479821.novalocal sudo[17872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:02 np0005479821.novalocal python3[17881]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005479821.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 10 09:18:02 np0005479821.novalocal useradd[17960]: new group: name=cloud-admin, GID=1002
Oct 10 09:18:02 np0005479821.novalocal useradd[17960]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 10 09:18:02 np0005479821.novalocal sudo[17872]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:03 np0005479821.novalocal sudo[18130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjacrkqujlyegisqewovagopwfwfquqb ; /usr/bin/python3'
Oct 10 09:18:03 np0005479821.novalocal sudo[18130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:03 np0005479821.novalocal python3[18136]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:18:03 np0005479821.novalocal sudo[18130]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:03 np0005479821.novalocal sudo[18436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpiunhtsyfifbigowvntylqgbwmaxbiw ; /usr/bin/python3'
Oct 10 09:18:03 np0005479821.novalocal sudo[18436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:03 np0005479821.novalocal python3[18445]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:18:03 np0005479821.novalocal sudo[18436]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:04 np0005479821.novalocal sudo[18724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjdlkjwhruromrnmqfcjeadzuzrkucuy ; /usr/bin/python3'
Oct 10 09:18:04 np0005479821.novalocal sudo[18724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:04 np0005479821.novalocal python3[18734]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087883.436974-150-184080211801980/source _original_basename=tmpw93rlh4q follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:18:04 np0005479821.novalocal sudo[18724]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:04 np0005479821.novalocal sudo[19116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudqtjwdfcxcetkvhmwjdijrxhwfzeop ; /usr/bin/python3'
Oct 10 09:18:04 np0005479821.novalocal sudo[19116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:05 np0005479821.novalocal python3[19127]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 10 09:18:05 np0005479821.novalocal systemd[1]: Starting Hostname Service...
Oct 10 09:18:05 np0005479821.novalocal systemd[1]: Started Hostname Service.
Oct 10 09:18:05 np0005479821.novalocal systemd-hostnamed[19251]: Changed pretty hostname to 'compute-0'
Oct 10 09:18:05 compute-0 systemd-hostnamed[19251]: Hostname set to <compute-0> (static)
Oct 10 09:18:05 compute-0 NetworkManager[3949]: <info>  [1760087885.2521] hostname: static hostname changed from "np0005479821.novalocal" to "compute-0"
Oct 10 09:18:05 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:18:05 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:18:05 compute-0 sudo[19116]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:05 compute-0 sshd-session[17285]: Connection closed by 38.102.83.114 port 50042
Oct 10 09:18:05 compute-0 sshd-session[17261]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:18:05 compute-0 systemd-logind[806]: Session 6 logged out. Waiting for processes to exit.
Oct 10 09:18:05 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 10 09:18:05 compute-0 systemd[1]: session-6.scope: Consumed 2.342s CPU time.
Oct 10 09:18:05 compute-0 systemd-logind[806]: Removed session 6.
Oct 10 09:18:15 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:18:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:18:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:18:35 compute-0 systemd[1]: man-db-cache-update.service: Consumed 54.276s CPU time.
Oct 10 09:18:35 compute-0 systemd[1]: run-r0e8f9d298a234ff18759b2eacb73a7a2.service: Deactivated successfully.
Oct 10 09:18:35 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:20:11 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 10 09:20:11 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 10 09:20:11 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 10 09:20:11 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 10 09:21:11 compute-0 systemd[1]: Starting dnf makecache...
Oct 10 09:21:11 compute-0 dnf[26530]: Failed determining last makecache time.
Oct 10 09:21:11 compute-0 dnf[26530]: CentOS Stream 9 - BaseOS                         36 kB/s | 6.7 kB     00:00
Oct 10 09:21:11 compute-0 dnf[26530]: CentOS Stream 9 - AppStream                      71 kB/s | 6.8 kB     00:00
Oct 10 09:21:11 compute-0 dnf[26530]: CentOS Stream 9 - CRB                            68 kB/s | 6.6 kB     00:00
Oct 10 09:21:12 compute-0 dnf[26530]: CentOS Stream 9 - Extras packages                86 kB/s | 8.0 kB     00:00
Oct 10 09:21:12 compute-0 dnf[26530]: Metadata cache created.
Oct 10 09:21:12 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 10 09:21:12 compute-0 systemd[1]: Finished dnf makecache.
Oct 10 09:21:41 compute-0 sshd-session[26536]: Accepted publickey for zuul from 38.102.83.82 port 35950 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:21:41 compute-0 systemd-logind[806]: New session 7 of user zuul.
Oct 10 09:21:41 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 10 09:21:41 compute-0 sshd-session[26536]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:21:41 compute-0 python3[26612]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:21:43 compute-0 sudo[26726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnpzzdisvdisaeucwddvwpowtvnqgtmr ; /usr/bin/python3'
Oct 10 09:21:43 compute-0 sudo[26726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:43 compute-0 python3[26728]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:43 compute-0 sudo[26726]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:43 compute-0 sudo[26799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alsmfeqlgsfcbcjufdcmkibwghbewavz ; /usr/bin/python3'
Oct 10 09:21:43 compute-0 sudo[26799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-0 python3[26801]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:44 compute-0 sudo[26799]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:44 compute-0 sudo[26825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbkheuqtoslkzeicfyrcdqewctmfbvro ; /usr/bin/python3'
Oct 10 09:21:44 compute-0 sudo[26825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-0 python3[26827]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:44 compute-0 sudo[26825]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:44 compute-0 sudo[26898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrfrqgdiszjeqvsaqteasyvqrrtkwtwi ; /usr/bin/python3'
Oct 10 09:21:44 compute-0 sudo[26898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-0 python3[26900]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:44 compute-0 sudo[26898]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:44 compute-0 sudo[26924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irdjvbumizajlpcqjedpzjubdcbywcii ; /usr/bin/python3'
Oct 10 09:21:44 compute-0 sudo[26924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-0 python3[26926]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:44 compute-0 sudo[26924]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-0 sudo[26997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaidqgzcqdaqtpfrtarpcgsbhyrsguvw ; /usr/bin/python3'
Oct 10 09:21:45 compute-0 sudo[26997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:45 compute-0 python3[26999]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:45 compute-0 sudo[26997]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-0 sudo[27023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwavhzymqictqmcraejiczozzewvlys ; /usr/bin/python3'
Oct 10 09:21:45 compute-0 sudo[27023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:45 compute-0 python3[27025]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:45 compute-0 sudo[27023]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-0 sudo[27096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uptxrofzzjulroollwuhkkvkfkybecxz ; /usr/bin/python3'
Oct 10 09:21:45 compute-0 sudo[27096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:45 compute-0 python3[27098]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:45 compute-0 sudo[27096]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-0 sudo[27122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqogecervoywwjfzatkfsfiyudovrstb ; /usr/bin/python3'
Oct 10 09:21:45 compute-0 sudo[27122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-0 python3[27124]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:46 compute-0 sudo[27122]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:46 compute-0 sudo[27195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdymtswrxtuazjsdjbwpckhkzaecoiil ; /usr/bin/python3'
Oct 10 09:21:46 compute-0 sudo[27195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-0 python3[27197]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:46 compute-0 sudo[27195]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:46 compute-0 sudo[27221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxcwkuckheccxofypjfmisnfzgftjlrv ; /usr/bin/python3'
Oct 10 09:21:46 compute-0 sudo[27221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-0 python3[27223]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:46 compute-0 sudo[27221]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:46 compute-0 sudo[27294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhfjaoqdlxgxfsawnxzutglsptesayrk ; /usr/bin/python3'
Oct 10 09:21:46 compute-0 sudo[27294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-0 python3[27296]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:47 compute-0 sudo[27294]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:47 compute-0 sudo[27320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdkxjdtlfvvrovfodyqojffzpthaywe ; /usr/bin/python3'
Oct 10 09:21:47 compute-0 sudo[27320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:47 compute-0 python3[27322]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:47 compute-0 sudo[27320]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:47 compute-0 sudo[27393]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqwempuljjhvtxriohvxwsxmypiodpq ; /usr/bin/python3'
Oct 10 09:21:47 compute-0 sudo[27393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:47 compute-0 python3[27395]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.298544-30669-87568050472940/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:47 compute-0 sudo[27393]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:50 compute-0 sshd-session[27421]: Unable to negotiate with 192.168.122.11 port 48848: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 10 09:21:50 compute-0 sshd-session[27420]: Connection closed by 192.168.122.11 port 48820 [preauth]
Oct 10 09:21:50 compute-0 sshd-session[27422]: Unable to negotiate with 192.168.122.11 port 48846: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 10 09:21:50 compute-0 sshd-session[27424]: Connection closed by 192.168.122.11 port 48836 [preauth]
Oct 10 09:21:50 compute-0 sshd-session[27423]: Unable to negotiate with 192.168.122.11 port 48854: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 10 09:21:59 compute-0 python3[27453]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:22:44 compute-0 PackageKit[6875]: daemon quit
Oct 10 09:22:44 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 09:26:58 compute-0 sshd-session[26539]: Received disconnect from 38.102.83.82 port 35950:11: disconnected by user
Oct 10 09:26:58 compute-0 sshd-session[26539]: Disconnected from user zuul 38.102.83.82 port 35950
Oct 10 09:26:58 compute-0 sshd-session[26536]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:26:58 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 10 09:26:58 compute-0 systemd[1]: session-7.scope: Consumed 4.860s CPU time.
Oct 10 09:26:58 compute-0 systemd-logind[806]: Session 7 logged out. Waiting for processes to exit.
Oct 10 09:26:58 compute-0 systemd-logind[806]: Removed session 7.
Oct 10 09:27:59 compute-0 sshd-session[27457]: Received disconnect from 158.160.80.249 port 39740:11:  [preauth]
Oct 10 09:27:59 compute-0 sshd-session[27457]: Disconnected from authenticating user root 158.160.80.249 port 39740 [preauth]
Oct 10 09:33:17 compute-0 sshd-session[27461]: Accepted publickey for zuul from 192.168.122.30 port 45444 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:33:17 compute-0 systemd-logind[806]: New session 8 of user zuul.
Oct 10 09:33:17 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 10 09:33:17 compute-0 sshd-session[27461]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:33:18 compute-0 python3.9[27614]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:33:20 compute-0 sudo[27793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgfnscnbdssxqwbznivvkyjeaakqeuse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088799.6035497-56-228143142305139/AnsiballZ_command.py'
Oct 10 09:33:20 compute-0 sudo[27793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:20 compute-0 python3.9[27795]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:33:27 compute-0 sudo[27793]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:28 compute-0 sshd-session[27464]: Connection closed by 192.168.122.30 port 45444
Oct 10 09:33:28 compute-0 sshd-session[27461]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:33:28 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 10 09:33:28 compute-0 systemd[1]: session-8.scope: Consumed 7.971s CPU time.
Oct 10 09:33:28 compute-0 systemd-logind[806]: Session 8 logged out. Waiting for processes to exit.
Oct 10 09:33:28 compute-0 systemd-logind[806]: Removed session 8.
Oct 10 09:33:43 compute-0 sshd-session[27853]: Accepted publickey for zuul from 192.168.122.30 port 43890 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:33:43 compute-0 systemd-logind[806]: New session 9 of user zuul.
Oct 10 09:33:43 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 10 09:33:43 compute-0 sshd-session[27853]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:33:44 compute-0 python3.9[28006]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 09:33:45 compute-0 python3.9[28180]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:33:46 compute-0 sudo[28330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpcizcossjxlchhcmetadzyyddsytiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088826.026892-93-36504369718280/AnsiballZ_command.py'
Oct 10 09:33:46 compute-0 sudo[28330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:46 compute-0 python3.9[28332]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:33:46 compute-0 sudo[28330]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:47 compute-0 sudo[28483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuhvhzrnuplslxjiczpdybyiuwzhpuxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088827.1683054-129-258072627923058/AnsiballZ_stat.py'
Oct 10 09:33:47 compute-0 sudo[28483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:47 compute-0 python3.9[28485]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:33:47 compute-0 sudo[28483]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:48 compute-0 sudo[28635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjgctwvxmkxmbtctwsjdyfehwnyorigd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088828.110011-153-145389389019706/AnsiballZ_file.py'
Oct 10 09:33:48 compute-0 sudo[28635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:48 compute-0 python3.9[28637]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:33:48 compute-0 sudo[28635]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:49 compute-0 sudo[28787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfehsayuznpvmrlmmrenjperrepmpdfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088829.143151-177-112182732154916/AnsiballZ_stat.py'
Oct 10 09:33:49 compute-0 sudo[28787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:49 compute-0 python3.9[28789]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:33:49 compute-0 sudo[28787]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:50 compute-0 sudo[28910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlphysiyqvpwkkyayijehyzwjbhwvjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088829.143151-177-112182732154916/AnsiballZ_copy.py'
Oct 10 09:33:50 compute-0 sudo[28910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:50 compute-0 python3.9[28912]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760088829.143151-177-112182732154916/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:33:50 compute-0 sudo[28910]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:51 compute-0 sudo[29062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lajygvetahsvskzesitkalyesnslnlbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088830.821796-222-13839406736101/AnsiballZ_setup.py'
Oct 10 09:33:51 compute-0 sudo[29062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:51 compute-0 python3.9[29064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:33:51 compute-0 sudo[29062]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:52 compute-0 sudo[29218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsaoljwskoljdoovcnhxjevpaymmqehk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088831.9537377-246-236078014623811/AnsiballZ_file.py'
Oct 10 09:33:52 compute-0 sudo[29218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:52 compute-0 python3.9[29220]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:33:52 compute-0 sudo[29218]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:53 compute-0 python3.9[29370]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:34:00 compute-0 python3.9[29625]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:34:01 compute-0 python3.9[29775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:34:02 compute-0 python3.9[29929]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:34:03 compute-0 sudo[30085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmvhtcpmfpyjavmdobglisvxmrsnqul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088842.9012954-390-176720146307633/AnsiballZ_setup.py'
Oct 10 09:34:03 compute-0 sudo[30085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:34:03 compute-0 python3.9[30087]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:34:03 compute-0 sudo[30085]: pam_unix(sudo:session): session closed for user root
Oct 10 09:34:04 compute-0 sudo[30169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikpjunysevlpeswwgzuverrtwzcjfbdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088842.9012954-390-176720146307633/AnsiballZ_dnf.py'
Oct 10 09:34:04 compute-0 sudo[30169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:34:04 compute-0 python3.9[30171]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:34:48 compute-0 systemd[1]: Reloading.
Oct 10 09:34:48 compute-0 systemd-rc-local-generator[30368]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:34:48 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 10 09:34:48 compute-0 systemd[1]: Reloading.
Oct 10 09:34:48 compute-0 systemd-rc-local-generator[30407]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:34:48 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 10 09:34:48 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 10 09:34:48 compute-0 systemd[1]: Reloading.
Oct 10 09:34:48 compute-0 systemd-rc-local-generator[30447]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:34:49 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 10 09:34:49 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:34:49 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:34:49 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:35:53 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:35:53 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:35:53 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 10 09:35:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:35:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:35:53 compute-0 systemd[1]: Reloading.
Oct 10 09:35:53 compute-0 systemd-rc-local-generator[30772]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:35:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:35:53 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 10 09:35:53 compute-0 PackageKit[31017]: daemon start
Oct 10 09:35:53 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 10 09:35:54 compute-0 sudo[30169]: pam_unix(sudo:session): session closed for user root
Oct 10 09:35:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:35:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:35:54 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.197s CPU time.
Oct 10 09:35:54 compute-0 systemd[1]: run-ra9f25656a6fc42ed882fa17a33712b5a.service: Deactivated successfully.
Oct 10 09:36:01 compute-0 sudo[31689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhdigcogqrlusskwlxculdynipxibjoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088960.7953897-426-166937795678328/AnsiballZ_command.py'
Oct 10 09:36:01 compute-0 sudo[31689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:01 compute-0 python3.9[31691]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:02 compute-0 sudo[31689]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:03 compute-0 sudo[31970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrmnvivdwfvfvujdxowjjdvoqhfsvbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088962.7434077-450-213146176492731/AnsiballZ_selinux.py'
Oct 10 09:36:03 compute-0 sudo[31970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:03 compute-0 python3.9[31972]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 09:36:03 compute-0 sudo[31970]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:04 compute-0 sudo[32122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-povdfuwahlqptttmqsbntlyzgdipumeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088964.31994-483-29456058722881/AnsiballZ_command.py'
Oct 10 09:36:04 compute-0 sudo[32122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:04 compute-0 python3.9[32124]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 09:36:05 compute-0 sudo[32122]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:06 compute-0 sudo[32275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvbshelsdnxioermkdloybmeygpvewom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088966.0453284-507-119735944291112/AnsiballZ_file.py'
Oct 10 09:36:06 compute-0 sudo[32275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:07 compute-0 python3.9[32277]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:36:07 compute-0 sudo[32275]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:08 compute-0 sudo[32427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlhocgugprlfbkmjpfuzlwvjexlfljof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088968.4659111-531-17773208717720/AnsiballZ_mount.py'
Oct 10 09:36:08 compute-0 sudo[32427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:09 compute-0 python3.9[32429]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 09:36:09 compute-0 sudo[32427]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:10 compute-0 sudo[32579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evswicnawxxkleqhlgxbjvjwbcmxysin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088970.136043-615-266217069624105/AnsiballZ_file.py'
Oct 10 09:36:10 compute-0 sudo[32579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:10 compute-0 python3.9[32581]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:10 compute-0 sudo[32579]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:11 compute-0 sudo[32731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcokydqnfncotdyvnuhdnibsakkypqsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088970.8920195-639-78154578530171/AnsiballZ_stat.py'
Oct 10 09:36:11 compute-0 sudo[32731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:16 compute-0 python3.9[32733]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:36:16 compute-0 sudo[32731]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:16 compute-0 sudo[32854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhskrnmsttqrjukgvhrlyrcuztajlrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088970.8920195-639-78154578530171/AnsiballZ_copy.py'
Oct 10 09:36:16 compute-0 sudo[32854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:17 compute-0 python3.9[32856]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760088970.8920195-639-78154578530171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:36:17 compute-0 sudo[32854]: pam_unix(sudo:session): session closed for user root
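
The stat/copy pair above drops a CA bundle into the system trust anchors; it only takes effect once update-ca-trust regenerates the extracted stores, which this playbook does further down. A sketch, assuming the bundle file sits in the current directory:

    install -o root -g root -m 0644 tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust   # rebuild the consolidated stores under /etc/pki/ca-trust/extracted
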
Oct 10 09:36:19 compute-0 sudo[33006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyowgdajogyqojnopigumjukyvidewck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088978.483142-720-198979300184837/AnsiballZ_getent.py'
Oct 10 09:36:19 compute-0 sudo[33006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:19 compute-0 python3.9[33008]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 09:36:19 compute-0 sudo[33006]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:20 compute-0 sudo[33159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teidquxcikxvqvhoavitewdlzqfigjqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088979.5882905-744-180244427260238/AnsiballZ_group.py'
Oct 10 09:36:20 compute-0 sudo[33159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:20 compute-0 python3.9[33161]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:36:20 compute-0 groupadd[33162]: group added to /etc/group: name=qemu, GID=107
Oct 10 09:36:20 compute-0 groupadd[33162]: group added to /etc/gshadow: name=qemu
Oct 10 09:36:20 compute-0 groupadd[33162]: new group: name=qemu, GID=107
Oct 10 09:36:20 compute-0 sudo[33159]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:21 compute-0 sudo[33317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pduefttlbfgirymxcjqmlbtkfprdtbas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088980.692965-768-65978060038605/AnsiballZ_user.py'
Oct 10 09:36:21 compute-0 sudo[33317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:21 compute-0 python3.9[33319]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 09:36:21 compute-0 useradd[33321]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 10 09:36:21 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:36:21 compute-0 sudo[33317]: pam_unix(sudo:session): session closed for user root
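
The getent/group/user sequence above pins the qemu account to UID/GID 107 so file ownership stays stable across hosts and container images. Shell equivalent of the logged creation:

    groupadd -g 107 qemu
    useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu
    id qemu   # expect uid=107(qemu) gid=107(qemu)
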
Oct 10 09:36:22 compute-0 sudo[33478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hotrrsvlvndmfyxpsnakpzshviewdymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088981.8097732-792-166680955869685/AnsiballZ_getent.py'
Oct 10 09:36:22 compute-0 sudo[33478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:22 compute-0 python3.9[33480]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 09:36:22 compute-0 sudo[33478]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:23 compute-0 sudo[33631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfmiishxuomogdyuahvcgzknufpbsavk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088982.8012016-816-272494270073280/AnsiballZ_group.py'
Oct 10 09:36:23 compute-0 sudo[33631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:23 compute-0 python3.9[33633]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:36:23 compute-0 groupadd[33634]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 10 09:36:23 compute-0 groupadd[33634]: group added to /etc/gshadow: name=hugetlbfs
Oct 10 09:36:23 compute-0 groupadd[33634]: new group: name=hugetlbfs, GID=42477
Oct 10 09:36:23 compute-0 sudo[33631]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:24 compute-0 sudo[33789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znsgbdeqaxqbmhueunkbxdnwqjpnsqgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088983.7845786-843-189655068785951/AnsiballZ_file.py'
Oct 10 09:36:24 compute-0 sudo[33789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:24 compute-0 python3.9[33791]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 09:36:24 compute-0 sudo[33789]: pam_unix(sudo:session): session closed for user root
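
The file task above prepares /var/lib/vhost_sockets, presumably for vhost-user sockets, with qemu ownership and the virt_cache_t SELinux type. Roughly:

    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets && chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets   # setype/seuser from the task
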
Oct 10 09:36:25 compute-0 sudo[33941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjmsizxweyzvvbvyfbwwqikbgfsiqbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088984.8512826-876-200375430529241/AnsiballZ_dnf.py'
Oct 10 09:36:25 compute-0 sudo[33941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:25 compute-0 python3.9[33943]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:36:27 compute-0 sudo[33941]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:27 compute-0 sudo[34094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqgrgzrlrsnfodqsthvdfaotvuihdfvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088987.608751-900-10038200041748/AnsiballZ_file.py'
Oct 10 09:36:27 compute-0 sudo[34094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:28 compute-0 python3.9[34096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:28 compute-0 sudo[34094]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:28 compute-0 sudo[34246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwaxtexpjwalixyhahoeqzrwpttusdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088988.3321815-924-250535475244394/AnsiballZ_stat.py'
Oct 10 09:36:28 compute-0 sudo[34246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:28 compute-0 python3.9[34248]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:36:28 compute-0 sudo[34246]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:29 compute-0 sudo[34369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvcyxrdwrhkbmrqlrzuqnufpabzdrfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088988.3321815-924-250535475244394/AnsiballZ_copy.py'
Oct 10 09:36:29 compute-0 sudo[34369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:29 compute-0 python3.9[34371]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760088988.3321815-924-250535475244394/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:29 compute-0 sudo[34369]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:30 compute-0 sudo[34521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfskjzcwumphadcoxnxupapbigjxarir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088989.7402873-969-268787200914635/AnsiballZ_systemd.py'
Oct 10 09:36:30 compute-0 sudo[34521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:30 compute-0 python3.9[34523]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:36:30 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 10 09:36:30 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 10 09:36:30 compute-0 kernel: Bridge firewalling registered
Oct 10 09:36:30 compute-0 systemd-modules-load[34527]: Inserted module 'br_netfilter'
Oct 10 09:36:30 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 10 09:36:30 compute-0 sudo[34521]: pam_unix(sudo:session): session closed for user root
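
The copy-plus-restart above makes systemd-modules-load re-read /etc/modules-load.d/, and the journal shows br_netfilter being inserted. The file's full contents are not logged, so the single-module version below is an assumption:

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf   # assumed contents; only this insertion is logged
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter                              # confirm the module is present
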
Oct 10 09:36:31 compute-0 sudo[34681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkcesnjmbhyyuvxzfsrsuupqyehnyiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088991.0493655-993-215754929776155/AnsiballZ_stat.py'
Oct 10 09:36:31 compute-0 sudo[34681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:31 compute-0 python3.9[34683]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:36:31 compute-0 sudo[34681]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:31 compute-0 sudo[34804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xltomtjqjplvnyjovwubvsgkzwjjrvff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088991.0493655-993-215754929776155/AnsiballZ_copy.py'
Oct 10 09:36:31 compute-0 sudo[34804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:32 compute-0 python3.9[34806]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760088991.0493655-993-215754929776155/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:32 compute-0 sudo[34804]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:32 compute-0 sudo[34956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvcgqhmihxwjxvqmvlblvptmxmlwmtek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088992.6552262-1047-247044775674834/AnsiballZ_dnf.py'
Oct 10 09:36:32 compute-0 sudo[34956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:33 compute-0 python3.9[34958]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:36:36 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:36:36 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:36:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:36:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:36:37 compute-0 systemd[1]: Reloading.
Oct 10 09:36:37 compute-0 systemd-rc-local-generator[35022]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:36:37 compute-0 sudo[34956]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:38 compute-0 python3.9[36415]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:36:39 compute-0 python3.9[37502]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 09:36:40 compute-0 python3.9[38340]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:36:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:36:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:36:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.757s CPU time.
Oct 10 09:36:41 compute-0 systemd[1]: run-rd372c37dab924ac29df3cb023c4e6e78.service: Deactivated successfully.
Oct 10 09:36:41 compute-0 sudo[39128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdrtjrwozmsmrggdrlwwkmzjyhtznrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089000.8671832-1164-141467736410746/AnsiballZ_command.py'
Oct 10 09:36:41 compute-0 sudo[39128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:41 compute-0 python3.9[39130]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 09:36:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 09:36:41 compute-0 sudo[39128]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:42 compute-0 sudo[39501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodqlbxhddlfcaevpkcqjsnvkobhuope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089002.4959323-1191-140842999580571/AnsiballZ_systemd.py'
Oct 10 09:36:42 compute-0 sudo[39501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:43 compute-0 python3.9[39503]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:36:43 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 09:36:43 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 09:36:43 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 09:36:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 09:36:43 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 09:36:43 compute-0 sudo[39501]: pam_unix(sudo:session): session closed for user root
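
Taken together, the tuned tasks install the profiles, select throughput-performance, and enable the daemon. The manual equivalent is roughly:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned.service
    tuned-adm active   # expect: Current active profile: throughput-performance
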
Oct 10 09:36:44 compute-0 python3.9[39665]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 09:36:47 compute-0 sudo[39816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvxmamrpixrcsikjeiczksoqmtzqemao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089007.5223944-1362-198785906882681/AnsiballZ_systemd.py'
Oct 10 09:36:47 compute-0 sudo[39816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:48 compute-0 python3.9[39818]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:36:48 compute-0 systemd[1]: Reloading.
Oct 10 09:36:48 compute-0 systemd-rc-local-generator[39845]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:48 compute-0 sudo[39816]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:48 compute-0 sudo[40004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdlcuknbgwvwnsvoavpwknaktoysoqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089008.567659-1362-225209495697303/AnsiballZ_systemd.py'
Oct 10 09:36:48 compute-0 sudo[40004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:49 compute-0 python3.9[40006]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:36:49 compute-0 systemd[1]: Reloading.
Oct 10 09:36:49 compute-0 systemd-rc-local-generator[40032]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:49 compute-0 sudo[40004]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:50 compute-0 sudo[40193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xextrnghqdhcqjighuexnajjiqdfuzjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089009.967915-1410-84488025061317/AnsiballZ_command.py'
Oct 10 09:36:50 compute-0 sudo[40193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:50 compute-0 python3.9[40195]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:50 compute-0 sudo[40193]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:51 compute-0 sudo[40346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yupgtqvzbiqsdmcicnatauxcmrzwxftr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089010.8653207-1434-59143337687214/AnsiballZ_command.py'
Oct 10 09:36:51 compute-0 sudo[40346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:51 compute-0 python3.9[40348]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:51 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 10 09:36:51 compute-0 sudo[40346]: pam_unix(sudo:session): session closed for user root
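
With the file and fstab entry prepared earlier, these two commands format and activate the swap, matching the kernel's "Adding 1048572k swap" line:

    mkswap /swap    # write the swap signature
    swapon /swap    # activate; check with 'swapon --show' or /proc/swaps
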
Oct 10 09:36:52 compute-0 sudo[40499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqdrmbohnpqncurbrkjoejhsccyhmiqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089011.9364057-1458-35574019279287/AnsiballZ_command.py'
Oct 10 09:36:52 compute-0 sudo[40499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:52 compute-0 python3.9[40501]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:53 compute-0 sudo[40499]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:54 compute-0 sudo[40661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsoxouznledksibntscrshqtiskpnckb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089014.2882478-1482-246573973992700/AnsiballZ_command.py'
Oct 10 09:36:54 compute-0 sudo[40661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:54 compute-0 python3.9[40663]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:54 compute-0 sudo[40661]: pam_unix(sudo:session): session closed for user root
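
Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges already-shared pages, complementing the ksm/ksmtuned disablement above. One caveat: the logged task went through the command module (_uses_shell=False), where ">" is handed to echo as a literal argument instead of performing a redirection, so the sysfs write may not have happened as intended. The shell form that does take effect:

    sh -c 'echo 2 > /sys/kernel/mm/ksm/run'   # 2 = stop ksmd and unmerge shared pages
    cat /sys/kernel/mm/ksm/run                # verify the current mode
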
Oct 10 09:36:55 compute-0 sudo[40814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knjkdtlbjjebaxlfjedlzzqascunhzxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089015.100379-1506-115802551724005/AnsiballZ_systemd.py'
Oct 10 09:36:55 compute-0 sudo[40814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:55 compute-0 python3.9[40816]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:36:55 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 09:36:55 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 10 09:36:55 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 10 09:36:55 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 10 09:36:55 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 09:36:55 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 10 09:36:55 compute-0 sudo[40814]: pam_unix(sudo:session): session closed for user root
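
Restarting systemd-sysctl re-applies every file under /etc/sysctl.d, including the 99-edpm.conf installed earlier (its contents are not logged). Equivalent one-liners:

    systemctl restart systemd-sysctl.service   # what the task does
    sysctl --system                            # same effect, with the values echoed
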
Oct 10 09:36:57 compute-0 sshd-session[27856]: Connection closed by 192.168.122.30 port 43890
Oct 10 09:36:57 compute-0 sshd-session[27853]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:36:57 compute-0 systemd-logind[806]: Session 9 logged out. Waiting for processes to exit.
Oct 10 09:36:57 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 10 09:36:57 compute-0 systemd[1]: session-9.scope: Consumed 2min 14.718s CPU time.
Oct 10 09:36:57 compute-0 systemd-logind[806]: Removed session 9.
Oct 10 09:37:02 compute-0 sshd-session[40846]: Accepted publickey for zuul from 192.168.122.30 port 51960 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:37:02 compute-0 systemd-logind[806]: New session 10 of user zuul.
Oct 10 09:37:02 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 10 09:37:02 compute-0 sshd-session[40846]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:37:03 compute-0 python3.9[40999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:37:05 compute-0 sudo[41153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghkcvmrdfijxygnyvtkdiioaouanntiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089024.5755713-68-128132427242761/AnsiballZ_getent.py'
Oct 10 09:37:05 compute-0 sudo[41153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:05 compute-0 python3.9[41155]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 09:37:05 compute-0 sudo[41153]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:05 compute-0 sudo[41306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltxustzyodjmipugbypsxcgjdywqhifq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089025.4678993-92-188838757140060/AnsiballZ_group.py'
Oct 10 09:37:05 compute-0 sudo[41306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:06 compute-0 python3.9[41308]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:37:06 compute-0 groupadd[41309]: group added to /etc/group: name=openvswitch, GID=42476
Oct 10 09:37:06 compute-0 groupadd[41309]: group added to /etc/gshadow: name=openvswitch
Oct 10 09:37:06 compute-0 groupadd[41309]: new group: name=openvswitch, GID=42476
Oct 10 09:37:06 compute-0 sudo[41306]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:06 compute-0 sudo[41464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuutwlkwuaaqblsuguboxvnrmqetgcxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089026.4785733-116-150089168441698/AnsiballZ_user.py'
Oct 10 09:37:06 compute-0 sudo[41464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:07 compute-0 python3.9[41466]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 09:37:07 compute-0 useradd[41468]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 10 09:37:07 compute-0 useradd[41468]: add 'openvswitch' to group 'hugetlbfs'
Oct 10 09:37:07 compute-0 useradd[41468]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 10 09:37:07 compute-0 sudo[41464]: pam_unix(sudo:session): session closed for user root
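
As with qemu, the openvswitch account gets a pinned UID/GID, plus hugetlbfs as a supplementary group so the daemon can use hugepage-backed memory. Shell equivalent:

    groupadd -g 42476 openvswitch
    useradd -u 42476 -g openvswitch -G hugetlbfs -c 'openvswitch user' -s /sbin/nologin openvswitch
    id openvswitch   # expect groups=42476(openvswitch),42477(hugetlbfs)
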
Oct 10 09:37:07 compute-0 sudo[41624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txstrsznvdohprdjitfmliktcjwizqoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089027.6713245-146-76388453091464/AnsiballZ_setup.py'
Oct 10 09:37:07 compute-0 sudo[41624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:08 compute-0 python3.9[41626]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:37:08 compute-0 sudo[41624]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:08 compute-0 sudo[41708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eydhlqykbfjnvvgjwctzfdcylaejgitr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089027.6713245-146-76388453091464/AnsiballZ_dnf.py'
Oct 10 09:37:08 compute-0 sudo[41708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:09 compute-0 python3.9[41710]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:37:11 compute-0 sudo[41708]: pam_unix(sudo:session): session closed for user root
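
The play fetches the openvswitch package with download_only=True before the separate install task that follows, so the later transaction runs entirely from the local cache. With plain dnf:

    dnf -y install --downloadonly openvswitch   # fetch into the cache only
    dnf -y install openvswitch                  # install from the cached packages
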
Oct 10 09:37:12 compute-0 sudo[41871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whyhtehvbriqfstnevsuspxutbtcmqxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089031.70014-188-473774088636/AnsiballZ_dnf.py'
Oct 10 09:37:12 compute-0 sudo[41871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:12 compute-0 python3.9[41873]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:23 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:37:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:37:23 compute-0 groupadd[41896]: group added to /etc/group: name=unbound, GID=993
Oct 10 09:37:23 compute-0 groupadd[41896]: group added to /etc/gshadow: name=unbound
Oct 10 09:37:23 compute-0 groupadd[41896]: new group: name=unbound, GID=993
Oct 10 09:37:23 compute-0 useradd[41903]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 10 09:37:23 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 10 09:37:23 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 10 09:37:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:37:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:37:25 compute-0 systemd[1]: Reloading.
Oct 10 09:37:25 compute-0 systemd-sysv-generator[42403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:25 compute-0 systemd-rc-local-generator[42400]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:37:26 compute-0 sudo[41871]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:37:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:37:26 compute-0 systemd[1]: run-r8968f0ef1a5f410281dcdb536aa300ef.service: Deactivated successfully.
Oct 10 09:37:28 compute-0 sudo[42973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdlexzjjrryfsqaexnqtpvlkpcwzcgbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089048.108229-212-167326985360057/AnsiballZ_systemd.py'
Oct 10 09:37:28 compute-0 sudo[42973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:29 compute-0 python3.9[42975]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:37:29 compute-0 systemd[1]: Reloading.
Oct 10 09:37:29 compute-0 systemd-sysv-generator[43010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:29 compute-0 systemd-rc-local-generator[43004]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:29 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 10 09:37:29 compute-0 chown[43018]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 10 09:37:29 compute-0 ovs-ctl[43023]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 10 09:37:29 compute-0 ovs-ctl[43023]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 10 09:37:29 compute-0 ovs-ctl[43023]: Starting ovsdb-server [  OK  ]
Oct 10 09:37:29 compute-0 ovs-vsctl[43072]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 10 09:37:29 compute-0 ovs-vsctl[43092]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a1a60c06-0b75-41d0-88d4-dc571cb95004\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 10 09:37:29 compute-0 ovs-ctl[43023]: Configuring Open vSwitch system IDs [  OK  ]
Oct 10 09:37:29 compute-0 ovs-vsctl[43098]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 10 09:37:29 compute-0 ovs-ctl[43023]: Enabling remote OVSDB managers [  OK  ]
Oct 10 09:37:29 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 10 09:37:29 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 10 09:37:29 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 10 09:37:29 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 10 09:37:29 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 10 09:37:29 compute-0 ovs-ctl[43143]: Inserting openvswitch module [  OK  ]
Oct 10 09:37:30 compute-0 ovs-ctl[43112]: Starting ovs-vswitchd [  OK  ]
Oct 10 09:37:30 compute-0 ovs-ctl[43112]: Enabling remote OVSDB managers [  OK  ]
Oct 10 09:37:30 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 10 09:37:30 compute-0 ovs-vsctl[43160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 10 09:37:30 compute-0 systemd[1]: Starting Open vSwitch...
Oct 10 09:37:30 compute-0 systemd[1]: Finished Open vSwitch.
Oct 10 09:37:30 compute-0 sudo[42973]: pam_unix(sudo:session): session closed for user root
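
Enabling openvswitch.service pulls up ovsdb-server (creating /etc/openvswitch/conf.db on first start), inserts the openvswitch kernel module, and starts ovs-vswitchd, exactly as the ovs-ctl lines report. To reproduce and verify by hand:

    systemctl enable --now openvswitch.service
    ovs-vsctl show                             # ovsdb-server and ovs-vswitchd must both respond
    ovs-vsctl get Open_vSwitch . ovs_version   # e.g. "3.3.5-115.el9s"
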
Oct 10 09:37:31 compute-0 python3.9[43312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:37:32 compute-0 sudo[43462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjkxoelhzkfsjcvqaezcdbljxyypsgbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089051.5915773-266-7751498327058/AnsiballZ_sefcontext.py'
Oct 10 09:37:32 compute-0 sudo[43462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:32 compute-0 python3.9[43464]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 09:37:33 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:37:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:37:33 compute-0 sudo[43462]: pam_unix(sudo:session): session closed for user root
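
community.general.sefcontext wraps semanage: it records a file-context rule so anything under /var/lib/edpm-config is labeled container_file_t, and the kernel's policy-reload lines above are the side effect of that change. By hand:

    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config   # relabel once the directory exists (created a few tasks later)
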
Oct 10 09:37:34 compute-0 python3.9[43619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:37:35 compute-0 sudo[43775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwtsninofebyqrjbnwiqczuhkkmkkgpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089055.4282155-320-42972888047819/AnsiballZ_dnf.py'
Oct 10 09:37:35 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 10 09:37:35 compute-0 sudo[43775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:35 compute-0 python3.9[43777]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:37 compute-0 sudo[43775]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:38 compute-0 sudo[43928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmysbdbqfdpztvpgqnstoehuuaufkqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089057.4362884-344-222669401265567/AnsiballZ_command.py'
Oct 10 09:37:38 compute-0 sudo[43928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:38 compute-0 python3.9[43930]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:37:38 compute-0 sudo[43928]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:39 compute-0 sudo[44215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kztlwjrcinwjowjlirbvcjdnmddqvqwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089059.168213-368-23137547629272/AnsiballZ_file.py'
Oct 10 09:37:39 compute-0 sudo[44215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:39 compute-0 python3.9[44217]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 09:37:39 compute-0 sudo[44215]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:40 compute-0 python3.9[44367]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:37:41 compute-0 sudo[44519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpojfdxlalgjxbjurxnysodlnsvjlavj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089061.0621524-416-230074994855479/AnsiballZ_dnf.py'
Oct 10 09:37:41 compute-0 sudo[44519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:41 compute-0 python3.9[44521]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:37:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:37:43 compute-0 systemd[1]: Reloading.
Oct 10 09:37:43 compute-0 systemd-sysv-generator[44559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:43 compute-0 systemd-rc-local-generator[44553]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:37:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:37:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:37:43 compute-0 systemd[1]: run-rfcd357e6502b4a328e5672ccd674b801.service: Deactivated successfully.
Oct 10 09:37:43 compute-0 sudo[44519]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:44 compute-0 sudo[44836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtsgnwuggprwzgueksxuynlyatptjrjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089064.197649-440-58462362672827/AnsiballZ_systemd.py'
Oct 10 09:37:44 compute-0 sudo[44836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:44 compute-0 python3.9[44838]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:37:44 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 09:37:44 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 10 09:37:44 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 10 09:37:44 compute-0 systemd[1]: Stopping Network Manager...
Oct 10 09:37:44 compute-0 NetworkManager[3949]: <info>  [1760089064.8164] caught SIGTERM, shutting down normally.
Oct 10 09:37:44 compute-0 NetworkManager[3949]: <info>  [1760089064.8180] dhcp4 (eth0): canceled DHCP transaction
Oct 10 09:37:44 compute-0 NetworkManager[3949]: <info>  [1760089064.8180] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:37:44 compute-0 NetworkManager[3949]: <info>  [1760089064.8180] dhcp4 (eth0): state changed no lease
Oct 10 09:37:44 compute-0 NetworkManager[3949]: <info>  [1760089064.8185] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:37:44 compute-0 NetworkManager[3949]: <info>  [1760089064.8240] exiting (success)
Oct 10 09:37:44 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:37:44 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:37:44 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 09:37:44 compute-0 systemd[1]: Stopped Network Manager.
Oct 10 09:37:44 compute-0 systemd[1]: NetworkManager.service: Consumed 10.759s CPU time, 4.1M memory peak, read 0B from disk, written 36.5K to disk.
Oct 10 09:37:44 compute-0 systemd[1]: Starting Network Manager...
Oct 10 09:37:44 compute-0 NetworkManager[44849]: <info>  [1760089064.8985] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:175b724b-d2ce-4794-9920-58528258c234)
Oct 10 09:37:44 compute-0 NetworkManager[44849]: <info>  [1760089064.8987] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 09:37:44 compute-0 NetworkManager[44849]: <info>  [1760089064.9056] manager[0x559d6e8d5090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 09:37:44 compute-0 systemd[1]: Starting Hostname Service...
Oct 10 09:37:45 compute-0 systemd[1]: Started Hostname Service.
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0034] hostname: hostname: using hostnamed
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0035] hostname: static hostname changed from (none) to "compute-0"
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0041] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0046] manager[0x559d6e8d5090]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0046] manager[0x559d6e8d5090]: rfkill: WWAN hardware radio set enabled
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0074] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0086] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0086] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0087] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0088] manager: Networking is enabled by state file
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0090] settings: Loaded settings plugin: keyfile (internal)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0094] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0127] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0138] dhcp: init: Using DHCP client 'internal'
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0140] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0147] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0159] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0173] device (lo): Activation: starting connection 'lo' (a2891a4f-849f-4558-a87b-30149848b6b6)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0185] device (eth0): carrier: link connected
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0190] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0196] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0197] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0205] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0212] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0220] device (eth1): carrier: link connected
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0225] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0231] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (678ec1ce-3478-5442-8942-601d574272cc) (indicated)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0231] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0238] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0246] device (eth1): Activation: starting connection 'ci-private-network' (678ec1ce-3478-5442-8942-601d574272cc)
Oct 10 09:37:45 compute-0 systemd[1]: Started Network Manager.
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0252] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0276] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0280] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0282] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0286] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0290] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0293] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0297] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0303] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0311] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0316] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0334] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0365] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0373] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0377] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0382] device (lo): Activation: successful, device activated.
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0389] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0395] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 09:37:45 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0466] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0472] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0475] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0478] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0480] device (eth1): Activation: successful, device activated.
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0512] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0513] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0518] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0521] device (eth0): Activation: successful, device activated.
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0526] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 09:37:45 compute-0 NetworkManager[44849]: <info>  [1760089065.0554] manager: startup complete
Oct 10 09:37:45 compute-0 sudo[44836]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:45 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 10 09:37:45 compute-0 sudo[45062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seljlozicyocsgfsexyejzrsvckeorrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089065.297523-464-157376473533918/AnsiballZ_dnf.py'
Oct 10 09:37:45 compute-0 sudo[45062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:45 compute-0 python3.9[45064]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
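The dnf task above is a plain package install; the shell equivalent, assuming the repositories already enabled on the node, is just:

    dnf install -y os-net-config   # the install transaction triggers the man-db cache refresh seen below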
Oct 10 09:37:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:37:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:37:52 compute-0 systemd[1]: Reloading.
Oct 10 09:37:52 compute-0 systemd-rc-local-generator[45115]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:52 compute-0 systemd-sysv-generator[45121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:37:53 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:37:53 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:37:53 compute-0 systemd[1]: run-r89e32e9cd0834b889822592914b681d9.service: Deactivated successfully.
Oct 10 09:37:53 compute-0 sudo[45062]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:54 compute-0 sudo[45525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsdmspempanaaetowpkgjxrigrdszfav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089074.691537-500-230920992156413/AnsiballZ_stat.py'
Oct 10 09:37:54 compute-0 sudo[45525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:55 compute-0 python3.9[45527]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:37:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:37:55 compute-0 sudo[45525]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:55 compute-0 sudo[45677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mftkrrihydntvtpwzprfjirkxebopiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089075.5132644-527-147253899176854/AnsiballZ_ini_file.py'
Oct 10 09:37:55 compute-0 sudo[45677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:56 compute-0 python3.9[45679]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:56 compute-0 sudo[45677]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:56 compute-0 sudo[45831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iknsdopvafifgctwvhzekabrkuwukbae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089076.7183957-557-114199475006565/AnsiballZ_ini_file.py'
Oct 10 09:37:56 compute-0 sudo[45831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:57 compute-0 python3.9[45833]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:57 compute-0 sudo[45831]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:57 compute-0 sudo[45983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhjxjatwcmlrnfwiaolgzbnnevawyhtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089077.3277972-557-37248364985326/AnsiballZ_ini_file.py'
Oct 10 09:37:57 compute-0 sudo[45983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:57 compute-0 python3.9[45985]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:57 compute-0 sudo[45983]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:58 compute-0 sudo[46135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcykctdbfqmndgmpxkwenpvigbtrhptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089078.2528157-602-225784920620025/AnsiballZ_ini_file.py'
Oct 10 09:37:58 compute-0 sudo[46135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:58 compute-0 python3.9[46137]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:58 compute-0 sudo[46135]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:59 compute-0 sudo[46287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hytpgnrgkttbdbdppjpuevmcfiueakls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089078.93829-602-51386264926829/AnsiballZ_ini_file.py'
Oct 10 09:37:59 compute-0 sudo[46287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:59 compute-0 python3.9[46289]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:59 compute-0 sudo[46287]: pam_unix(sudo:session): session closed for user root
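Taken together, the five ini_file tasks above pin no-auto-default=* in the [main] section of /etc/NetworkManager/NetworkManager.conf and strip any dns= and rc-manager= overrides from both that file and the cloud-init drop-in. A sketch of the intended end state, inferred only from the logged parameters (backup=True also leaves timestamped backup copies behind):

    cat /etc/NetworkManager/NetworkManager.conf
    # [main]
    # no-auto-default=*    <- NM stops auto-creating 'Wired connection N' profiles
    nmcli general reload   # ask NetworkManager to re-read its configuration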
Oct 10 09:38:00 compute-0 sudo[46439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edibqqtratwxtzdshhnwuzecvnfwqzht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089079.7702274-647-124580603814021/AnsiballZ_stat.py'
Oct 10 09:38:00 compute-0 sudo[46439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:00 compute-0 python3.9[46441]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:00 compute-0 sudo[46439]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:01 compute-0 sudo[46562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqjtotawylxwdnpenuumlwxkipgckrqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089079.7702274-647-124580603814021/AnsiballZ_copy.py'
Oct 10 09:38:01 compute-0 sudo[46562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:01 compute-0 python3.9[46564]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089079.7702274-647-124580603814021/.source _original_basename=.fhcmlbju follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:01 compute-0 sudo[46562]: pam_unix(sudo:session): session closed for user root
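The payload of the copy above is not logged (content=NOT_LOGGING_PARAMETER), so what follows is purely a hypothetical illustration: /etc/dhcp/dhclient-enter-hooks is sourced by dhclient-script before it acts, and the usual reason to install one is to override make_resolv_conf() so DHCP leases stop rewriting /etc/resolv.conf:

    # /etc/dhcp/dhclient-enter-hooks (hypothetical content; mode 0755 as in the task)
    make_resolv_conf() {
        :   # no-op: leave /etc/resolv.conf under other management
    }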
Oct 10 09:38:01 compute-0 sudo[46714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zewabiplkpmkarjzphsnifomzujiujtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089081.4606998-692-173380277871724/AnsiballZ_file.py'
Oct 10 09:38:01 compute-0 sudo[46714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:01 compute-0 python3.9[46716]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:01 compute-0 sudo[46714]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:02 compute-0 sudo[46866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secoowqghaahbfhisrhqvtyufwjhvbih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089082.254111-716-61919964136806/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 10 09:38:02 compute-0 sudo[46866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:02 compute-0 python3.9[46868]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 10 09:38:02 compute-0 sudo[46866]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:03 compute-0 sudo[47018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avwioymwbvweweiqiyjfldnwphpfjrut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089083.2308223-743-235392524988924/AnsiballZ_file.py'
Oct 10 09:38:03 compute-0 sudo[47018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:03 compute-0 python3.9[47020]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:03 compute-0 sudo[47018]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:04 compute-0 sudo[47170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcateekkywixkqkgjlqgttzeydzzexhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089084.1053138-773-203128303223277/AnsiballZ_stat.py'
Oct 10 09:38:04 compute-0 sudo[47170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:04 compute-0 sudo[47170]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:05 compute-0 sudo[47293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbfgbpxjazpfjqbttbnxgegskjehzvaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089084.1053138-773-203128303223277/AnsiballZ_copy.py'
Oct 10 09:38:05 compute-0 sudo[47293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:05 compute-0 sudo[47293]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:06 compute-0 sudo[47445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbtbfukwrsutzgzaszspegfyxzjauvve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089085.6431718-818-133061283894242/AnsiballZ_slurp.py'
Oct 10 09:38:06 compute-0 sudo[47445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:06 compute-0 python3.9[47447]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 10 09:38:06 compute-0 sudo[47445]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:07 compute-0 sudo[47620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqidljqqaznwmksffywtzdqunkzzqauo ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.5901072-845-59134727845786/async_wrapper.py j630622325486 300 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.5901072-845-59134727845786/AnsiballZ_edpm_os_net_config.py _'
Oct 10 09:38:07 compute-0 sudo[47620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:07 compute-0 ansible-async_wrapper.py[47622]: Invoked with j630622325486 300 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.5901072-845-59134727845786/AnsiballZ_edpm_os_net_config.py _
Oct 10 09:38:07 compute-0 ansible-async_wrapper.py[47625]: Starting module and watcher
Oct 10 09:38:07 compute-0 ansible-async_wrapper.py[47625]: Start watching 47626 (300)
Oct 10 09:38:07 compute-0 ansible-async_wrapper.py[47626]: Start module (47626)
Oct 10 09:38:07 compute-0 ansible-async_wrapper.py[47622]: Return async_wrapper task started.
Oct 10 09:38:07 compute-0 sudo[47620]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:07 compute-0 python3.9[47627]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
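The edpm_os_net_config module wraps the os-net-config CLI; a rough hand-run equivalent of the logged parameters (flag spellings assumed from the parameter names; use_nmstate=True selects the nmstate provider) would be:

    os-net-config --config-file /etc/os-net-config/config.yaml \
                  --cleanup --debug --detailed-exit-codes
    # with detailed exit codes, rc=0 means no changes were needed and rc=2 means changes were applied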
Oct 10 09:38:08 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 10 09:38:08 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 10 09:38:08 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 10 09:38:08 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 10 09:38:08 compute-0 kernel: cfg80211: failed to load regulatory.db
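The two cfg80211 lines are a known-benign pattern on virtual machines: error -2 is ENOENT, i.e. the optional wireless regulatory database firmware is simply not installed, which is harmless on a wired-only guest. A quick check, as a sketch:

    ls -l /lib/firmware/regulatory.db*   # absent here, hence the ENOENT above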
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6469] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6486] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47628 uid=0 result="success"
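The checkpoint-create and checkpoint-adjust-rollback-timeout audit entries are NetworkManager's transactional safety net: the nmstate-driven reconfiguration runs inside a checkpoint that rolls everything back automatically unless the client keeps extending the timeout and finally destroys it (as happens at 09:38:11 below). The same D-Bus call can be issued by hand, for example:

    # CheckpointCreate(devices, rollback_timeout_seconds, flags); an empty
    # device array ('0') checkpoints every device -- a sketch, not the module's code
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 1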
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6898] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6901] audit: op="connection-add" uuid="b8d3f325-af7f-4b03-9182-a34750368b18" name="br-ex-br" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6913] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6915] audit: op="connection-add" uuid="18dd541b-bf4d-4f29-a5d8-009ee36d087c" name="br-ex-port" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6924] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6926] audit: op="connection-add" uuid="e37c3867-4895-4d97-a1b8-604059e49b5b" name="eth1-port" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6936] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6938] audit: op="connection-add" uuid="a85520f7-af9c-469f-91c9-679b633438b5" name="vlan20-port" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6949] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6950] audit: op="connection-add" uuid="c57aaa5b-a975-4a21-8452-6d6dc8de645b" name="vlan21-port" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6960] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6962] audit: op="connection-add" uuid="5aae445c-cb50-4027-97c6-77d155e0a429" name="vlan22-port" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6972] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6975] audit: op="connection-add" uuid="8d826cff-5848-4f7e-ba95-d3ac9aa5a332" name="vlan23-port" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.6993] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7007] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7009] audit: op="connection-add" uuid="33f2b542-61ff-4675-abf6-e3932c115291" name="br-ex-if" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7046] audit: op="connection-update" uuid="678ec1ce-3478-5442-8942-601d574272cc" name="ci-private-network" args="ovs-external-ids.data,connection.master,connection.controller,connection.port-type,connection.slave-type,connection.timestamp,ipv4.method,ipv4.routes,ipv4.dns,ipv4.never-default,ipv4.routing-rules,ipv4.addresses,ovs-interface.type,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.addresses" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7059] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7060] audit: op="connection-add" uuid="a9a12aaf-69da-41fb-a006-5af84e5d464c" name="vlan20-if" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7074] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7076] audit: op="connection-add" uuid="c076b722-f842-40d1-aadf-be53a0b40537" name="vlan21-if" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7091] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7093] audit: op="connection-add" uuid="17abac88-33c2-4b20-b17b-6bf931c0eb0a" name="vlan22-if" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7107] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7109] audit: op="connection-add" uuid="edbc1719-da23-46a0-9897-0943a4bca33d" name="vlan23-if" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7119] audit: op="connection-delete" uuid="c8beefe8-9fab-3e79-9bba-dd9a73ce9e5c" name="Wired connection 1" pid=47628 uid=0 result="success"
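The burst of connection-add entries above builds the standard NetworkManager model of an OVS bridge: an ovs-bridge profile, one ovs-port profile per attached device or VLAN, an ovs-interface profile to carry the IP configuration, eth1 re-parented as the uplink, and the now-redundant 'Wired connection 1' deleted. Recreating just the br-ex trio by hand would look roughly like this (profile names taken from the log; IP settings omitted):

    nmcli conn add type ovs-bridge con-name br-ex-br conn.interface br-ex
    nmcli conn add type ovs-port con-name br-ex-port conn.interface br-ex master br-ex-br
    nmcli conn add type ovs-interface slave-type ovs-port con-name br-ex-if \
        conn.interface br-ex master br-ex-port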
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7129] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7138] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7141] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b8d3f325-af7f-4b03-9182-a34750368b18)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7142] audit: op="connection-activate" uuid="b8d3f325-af7f-4b03-9182-a34750368b18" name="br-ex-br" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7144] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7149] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7152] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (18dd541b-bf4d-4f29-a5d8-009ee36d087c)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7154] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7158] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7161] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e37c3867-4895-4d97-a1b8-604059e49b5b)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7163] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7168] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7172] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (a85520f7-af9c-469f-91c9-679b633438b5)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7174] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7179] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7182] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c57aaa5b-a975-4a21-8452-6d6dc8de645b)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7184] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7189] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7192] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5aae445c-cb50-4027-97c6-77d155e0a429)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7194] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7200] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7203] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8d826cff-5848-4f7e-ba95-d3ac9aa5a332)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7204] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7206] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7208] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7212] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7216] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7219] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (33f2b542-61ff-4675-abf6-e3932c115291)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7220] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7224] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7226] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7228] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7229] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7238] device (eth1): disconnecting for new activation request.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7239] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7242] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7244] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7245] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7247] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7251] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7254] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (a9a12aaf-69da-41fb-a006-5af84e5d464c)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7255] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7257] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7259] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7260] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7262] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7266] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7270] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c076b722-f842-40d1-aadf-be53a0b40537)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7271] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7273] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7275] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7276] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7278] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7282] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7285] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (17abac88-33c2-4b20-b17b-6bf931c0eb0a)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7286] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7290] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7291] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7292] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7295] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7298] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7302] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (edbc1719-da23-46a0-9897-0943a4bca33d)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7303] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7305] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7309] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7311] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7313] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7327] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7330] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7333] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7335] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7341] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7344] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7348] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7350] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7352] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7356] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7359] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7362] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7364] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7367] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7370] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7373] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7374] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 systemd-udevd[47632]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:38:09 compute-0 kernel: Timeout policy base is empty
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7378] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7381] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7383] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7385] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7388] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7391] dhcp4 (eth0): canceled DHCP transaction
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7391] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7391] dhcp4 (eth0): state changed no lease
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7392] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7413] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7416] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47628 uid=0 result="fail" reason="Device is not activated"
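Note the asymmetry: the device-reapply on eth0 above succeeds because eth0 stays activated on its DHCP profile, while this one on eth1 fails with 'Device is not activated' since eth1 is mid-teardown for its move into the OVS bridge (see the 'activated -> deactivating' transition earlier). The operation itself is the ordinary:

    nmcli device reapply eth1   # only valid while the device is in the activated state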
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7422] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 10 09:38:09 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7457] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7460] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7466] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7525] device (eth1): disconnecting for new activation request.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7526] audit: op="connection-activate" uuid="678ec1ce-3478-5442-8942-601d574272cc" name="ci-private-network" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7533] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7569] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47628 uid=0 result="success"
Oct 10 09:38:09 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7597] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 10 09:38:09 compute-0 kernel: br-ex: entered promiscuous mode
Oct 10 09:38:09 compute-0 kernel: vlan22: entered promiscuous mode
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7782] device (eth1): Activation: starting connection 'ci-private-network' (678ec1ce-3478-5442-8942-601d574272cc)
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7789] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7800] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7803] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 systemd-udevd[47633]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7809] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7813] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7821] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7823] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7824] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7825] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7826] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7827] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 kernel: vlan23: entered promiscuous mode
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7870] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7878] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7882] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7885] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7889] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7892] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7896] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7899] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7902] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7906] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7910] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7913] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7917] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7928] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7936] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7942] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 kernel: vlan20: entered promiscuous mode
Oct 10 09:38:09 compute-0 systemd-udevd[47634]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7963] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7970] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7976] device (eth1): Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7982] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7988] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.7998] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8011] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 kernel: vlan21: entered promiscuous mode
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8023] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8037] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8038] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8041] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8046] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8051] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8057] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8067] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8071] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8076] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8086] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8101] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8147] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8148] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8153] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8158] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8172] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8206] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8208] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-0 NetworkManager[44849]: <info>  [1760089089.8214] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:10 compute-0 NetworkManager[44849]: <info>  [1760089090.9562] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.1197] checkpoint[0x559d6e8ac950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.1200] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 sudo[47984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcsdyavmfsdrvmucbpwflpnutooesmki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089090.6213024-845-149861220909057/AnsiballZ_async_status.py'
Oct 10 09:38:11 compute-0 sudo[47984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:11 compute-0 python3.9[47987]: ansible-ansible.legacy.async_status Invoked with jid=j630622325486.47622 mode=status _async_dir=/root/.ansible_async
Oct 10 09:38:11 compute-0 sudo[47984]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.4235] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.4261] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.6766] audit: op="networking-control" arg="global-dns-configuration" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.6793] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.6828] audit: op="networking-control" arg="global-dns-configuration" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.6850] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.8355] checkpoint[0x559d6e8aca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 10 09:38:11 compute-0 NetworkManager[44849]: <info>  [1760089091.8359] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47628 uid=0 result="success"
Oct 10 09:38:11 compute-0 ansible-async_wrapper.py[47626]: Module complete (47626)
Oct 10 09:38:12 compute-0 ansible-async_wrapper.py[47625]: Done in kid B.
Oct 10 09:38:14 compute-0 sudo[48090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-carrdqzaobwitdptzdgjhuxcjaxdwzrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089090.6213024-845-149861220909057/AnsiballZ_async_status.py'
Oct 10 09:38:14 compute-0 sudo[48090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:14 compute-0 python3.9[48092]: ansible-ansible.legacy.async_status Invoked with jid=j630622325486.47622 mode=status _async_dir=/root/.ansible_async
Oct 10 09:38:14 compute-0 sudo[48090]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:38:15 compute-0 sudo[48192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjghpenwhonmburmldolfvigwohbbjbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089090.6213024-845-149861220909057/AnsiballZ_async_status.py'
Oct 10 09:38:15 compute-0 sudo[48192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:15 compute-0 python3.9[48194]: ansible-ansible.legacy.async_status Invoked with jid=j630622325486.47622 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 09:38:15 compute-0 sudo[48192]: pam_unix(sudo:session): session closed for user root
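
The jid=j630622325486.47622 lines show Ansible's async pattern: the play fires a long-running module in the background, polls async_status with mode=status (here at 09:38:11 and 09:38:14) until the wrapper reports completion, then calls it once with mode=cleanup to remove the results file under /root/.ansible_async. A rough sketch of the controller-side loop, with poll_once standing in (hypothetically) for one async_status call:

    import time

    def wait_for_job(poll_once, interval=3.0, timeout=300.0):
        """Poll until the async job reports finished, mirroring the repeated
        mode=status invocations above; the caller then does one cleanup pass."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            result = poll_once()          # returns the async_status result dict
            if result.get("finished"):
                return result
            time.sleep(interval)
        raise TimeoutError("async job did not finish in time")
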
Oct 10 09:38:16 compute-0 sudo[48344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfproixkdatccdfnpaebqzwayfhbajh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089095.81727-926-150373356396673/AnsiballZ_stat.py'
Oct 10 09:38:16 compute-0 sudo[48344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:16 compute-0 python3.9[48346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:16 compute-0 sudo[48344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:16 compute-0 sudo[48467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyyzqggsknphuuxjggeknbpfnragwfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089095.81727-926-150373356396673/AnsiballZ_copy.py'
Oct 10 09:38:16 compute-0 sudo[48467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:16 compute-0 python3.9[48469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089095.81727-926-150373356396673/.source.returncode _original_basename=.jjghpcb5 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:16 compute-0 sudo[48467]: pam_unix(sudo:session): session closed for user root
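
The stat/copy pair above is Ansible's idempotent file write: ansible.legacy.stat reports the target's SHA-1, and the copy runs only when it differs from the rendered source. The logged checksum b6589fc6ab0dc82cf12099d1c2d40ab994e8410c is the SHA-1 of the single character "0", i.e. os-net-config exited successfully. A standard-library sketch of the same comparison (path taken from the log, content assumed):

    import hashlib
    from pathlib import Path

    def sha1_of(path: Path) -> str:
        """Hex SHA-1 of a file's contents, as ansible.legacy.stat reports it."""
        return hashlib.sha1(path.read_bytes()).hexdigest()

    target = Path("/var/lib/edpm-config/os-net-config.returncode")
    desired = b"0"  # assumed content; its SHA-1 matches the logged checksum
    assert hashlib.sha1(desired).hexdigest() == \
        "b6589fc6ab0dc82cf12099d1c2d40ab994e8410c"

    # copy semantics: rewrite only when the checksums disagree
    if not target.exists() or sha1_of(target) != hashlib.sha1(desired).hexdigest():
        target.write_bytes(desired)
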
Oct 10 09:38:17 compute-0 sudo[48619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmmalaboaibghuferrdcssuejshxwzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089097.2239902-974-103993447275055/AnsiballZ_stat.py'
Oct 10 09:38:17 compute-0 sudo[48619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:17 compute-0 python3.9[48621]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:17 compute-0 sudo[48619]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:18 compute-0 sudo[48743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsksvcywljfwpkfgodlgflvdissbvkyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089097.2239902-974-103993447275055/AnsiballZ_copy.py'
Oct 10 09:38:18 compute-0 sudo[48743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:18 compute-0 python3.9[48745]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089097.2239902-974-103993447275055/.source.cfg _original_basename=.rbnr2569 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:18 compute-0 sudo[48743]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:19 compute-0 sudo[48895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfjvpzkzvwfuggqpcbklfgekrevhvrto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089098.8146737-1019-159845261043414/AnsiballZ_systemd.py'
Oct 10 09:38:19 compute-0 sudo[48895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:19 compute-0 python3.9[48897]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:38:19 compute-0 systemd[1]: Reloading Network Manager...
Oct 10 09:38:19 compute-0 NetworkManager[44849]: <info>  [1760089099.4796] audit: op="reload" arg="0" pid=48901 uid=0 result="success"
Oct 10 09:38:19 compute-0 NetworkManager[44849]: <info>  [1760089099.4804] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 10 09:38:19 compute-0 systemd[1]: Reloaded Network Manager.
Oct 10 09:38:19 compute-0 sudo[48895]: pam_unix(sudo:session): session closed for user root
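
state=reloaded in the systemd task above translates to a systemctl reload, which for NetworkManager means a SIGHUP-driven re-read of NetworkManager.conf and its conf.d drop-ins, exactly the config signal logged at 09:38:19. The non-Ansible equivalent is a one-liner:

    import subprocess

    # Equivalent of ansible.builtin.systemd with name=NetworkManager,
    # state=reloaded: systemd delivers the reload, and NM logs the SIGHUP
    # config re-read seen above.
    subprocess.run(["systemctl", "reload", "NetworkManager"], check=True)
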
Oct 10 09:38:19 compute-0 sshd-session[40849]: Connection closed by 192.168.122.30 port 51960
Oct 10 09:38:20 compute-0 sshd-session[40846]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:38:20 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 10 09:38:20 compute-0 systemd[1]: session-10.scope: Consumed 50.967s CPU time.
Oct 10 09:38:20 compute-0 systemd-logind[806]: Session 10 logged out. Waiting for processes to exit.
Oct 10 09:38:20 compute-0 systemd-logind[806]: Removed session 10.
Oct 10 09:38:25 compute-0 sshd-session[48933]: Accepted publickey for zuul from 192.168.122.30 port 44752 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:38:25 compute-0 systemd-logind[806]: New session 11 of user zuul.
Oct 10 09:38:25 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 10 09:38:25 compute-0 sshd-session[48933]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:38:26 compute-0 python3.9[49086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:27 compute-0 python3.9[49240]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:38:29 compute-0 python3.9[49434]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:38:29 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:38:29 compute-0 sshd-session[48936]: Connection closed by 192.168.122.30 port 44752
Oct 10 09:38:29 compute-0 sshd-session[48933]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:38:29 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 10 09:38:29 compute-0 systemd[1]: session-11.scope: Consumed 2.500s CPU time.
Oct 10 09:38:29 compute-0 systemd-logind[806]: Session 11 logged out. Waiting for processes to exit.
Oct 10 09:38:29 compute-0 systemd-logind[806]: Removed session 11.
Oct 10 09:38:35 compute-0 sshd-session[49463]: Accepted publickey for zuul from 192.168.122.30 port 46718 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:38:35 compute-0 systemd-logind[806]: New session 12 of user zuul.
Oct 10 09:38:35 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 10 09:38:35 compute-0 sshd-session[49463]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:38:36 compute-0 python3.9[49616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:37 compute-0 python3.9[49770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:38 compute-0 sudo[49925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajwbfttqoslmfowrqzmxkfgxtpeqbyjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089117.7630932-80-88181108484412/AnsiballZ_setup.py'
Oct 10 09:38:38 compute-0 sudo[49925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:38 compute-0 python3.9[49927]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:38:38 compute-0 sudo[49925]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:39 compute-0 sudo[50009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxbrpyysqnjpcexhayjuwddnjtdrxqjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089117.7630932-80-88181108484412/AnsiballZ_dnf.py'
Oct 10 09:38:39 compute-0 sudo[50009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:39 compute-0 python3.9[50011]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:38:40 compute-0 sudo[50009]: pam_unix(sudo:session): session closed for user root
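
The dnf task installs podman with state=present, which is a no-op when the package is already on the host (the session closed after about a second, consistent with a cache check rather than a download). A rough shell-level sketch of that idempotence, assuming root:

    import subprocess

    def ensure_installed(pkg: str) -> None:
        # state=present semantics: install only if the package is missing.
        if subprocess.run(["rpm", "-q", pkg], capture_output=True).returncode:
            subprocess.run(["dnf", "-y", "install", pkg], check=True)

    ensure_installed("podman")
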
Oct 10 09:38:41 compute-0 sudo[50163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnimxqfltwkiuttazrvvkvfgcxxtdqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089120.7437658-116-5130963517506/AnsiballZ_setup.py'
Oct 10 09:38:41 compute-0 sudo[50163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:41 compute-0 python3.9[50165]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:38:41 compute-0 sudo[50163]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:42 compute-0 sudo[50358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqxemjwynivuzklvxspvkuqrrfgblzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089122.241803-149-160033835773881/AnsiballZ_file.py'
Oct 10 09:38:42 compute-0 sudo[50358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:42 compute-0 python3.9[50360]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:42 compute-0 sudo[50358]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:43 compute-0 sudo[50510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oepixtaakcnxyhdwpecfagixunbtmxrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089123.189799-173-95845447829386/AnsiballZ_command.py'
Oct 10 09:38:43 compute-0 sudo[50510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:43 compute-0 python3.9[50512]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4216834405-merged.mount: Deactivated successfully.
Oct 10 09:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck4280254661-merged.mount: Deactivated successfully.
Oct 10 09:38:43 compute-0 podman[50513]: 2025-10-10 09:38:43.987341535 +0000 UTC m=+0.071224912 system refresh
Oct 10 09:38:44 compute-0 sudo[50510]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:44 compute-0 sudo[50675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agbcnjbaoerplueshbfqtztizhbqpine ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089124.2821624-197-64460053512030/AnsiballZ_stat.py'
Oct 10 09:38:44 compute-0 sudo[50675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:38:44 compute-0 python3.9[50677]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:45 compute-0 sudo[50675]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:45 compute-0 sudo[50798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtjtmwhmcfkxppffijjvesvzmhvvmfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089124.2821624-197-64460053512030/AnsiballZ_copy.py'
Oct 10 09:38:45 compute-0 sudo[50798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:45 compute-0 python3.9[50800]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089124.2821624-197-64460053512030/.source.json follow=False _original_basename=podman_network_config.j2 checksum=d92ed4519dfede7c16c67470c5707de93157d7f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:45 compute-0 sudo[50798]: pam_unix(sudo:session): session closed for user root
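
This block pins podman's default network: the playbook creates /etc/containers/networks, runs podman network inspect podman (the first podman call on the freshly provisioned host, hence the "system refresh" event), and installs a rendered podman_network_config.j2 as /etc/containers/networks/podman.json so netavark uses a fixed definition. A sketch that captures the inspect output instead of rendering a template (an assumption; the real role templates the file):

    import json
    import subprocess
    from pathlib import Path

    # "podman network inspect podman" prints a JSON array with one entry
    # for the named network.
    inspect = subprocess.run(
        ["podman", "network", "inspect", "podman"],
        capture_output=True, text=True, check=True,
    )
    network = json.loads(inspect.stdout)[0]

    # Persist the definition where podman/netavark look up named networks.
    dest = Path("/etc/containers/networks/podman.json")
    dest.parent.mkdir(mode=0o755, parents=True, exist_ok=True)
    dest.write_text(json.dumps(network, indent=2) + "\n")
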
Oct 10 09:38:46 compute-0 sudo[50950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fibzpkiqotninabvhlkjpjhjebaunqso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089125.9918473-242-188267937350217/AnsiballZ_stat.py'
Oct 10 09:38:46 compute-0 sudo[50950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:46 compute-0 python3.9[50952]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:46 compute-0 sudo[50950]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:46 compute-0 sudo[51073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyqlrplyffuecbvxrhuyttrkjennusgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089125.9918473-242-188267937350217/AnsiballZ_copy.py'
Oct 10 09:38:46 compute-0 sudo[51073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:47 compute-0 python3.9[51075]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089125.9918473-242-188267937350217/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:47 compute-0 sudo[51073]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:47 compute-0 sudo[51225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjcbfpzsnpeyewakjlyhmkebqshjlsml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089127.4448125-290-230824302715862/AnsiballZ_ini_file.py'
Oct 10 09:38:47 compute-0 sudo[51225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:48 compute-0 python3.9[51227]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:48 compute-0 sudo[51225]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:48 compute-0 sudo[51377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzttqhuiwosqfirojbnaornuqoysnlvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089128.3280203-290-249852901805673/AnsiballZ_ini_file.py'
Oct 10 09:38:48 compute-0 sudo[51377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:48 compute-0 python3.9[51379]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:48 compute-0 sudo[51377]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:49 compute-0 sudo[51529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlbnccnnjpcddywmvqhwgelpoxlfzbkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089128.9290764-290-128217218886970/AnsiballZ_ini_file.py'
Oct 10 09:38:49 compute-0 sudo[51529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:49 compute-0 python3.9[51531]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:49 compute-0 sudo[51529]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:49 compute-0 sudo[51681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwrjydhziwbydfvpimrsyvhjotducbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089129.6178336-290-181596182294421/AnsiballZ_ini_file.py'
Oct 10 09:38:49 compute-0 sudo[51681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:50 compute-0 python3.9[51683]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:50 compute-0 sudo[51681]: pam_unix(sudo:session): session closed for user root
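
The four ini_file tasks above converge on one containers.conf: pids_limit in [containers], events_logger and runtime in [engine], network_backend in [network], with values taken straight from the logged arguments (the quotes are part of the values because containers.conf is TOML). A sketch that writes the equivalent fragment in one go, whereas the real tasks edit each option individually and idempotently:

    import textwrap
    from pathlib import Path

    # Net result of the four community.general.ini_file tasks above.
    CONTAINERS_CONF = textwrap.dedent("""\
        [containers]
        pids_limit = 4096

        [engine]
        events_logger = "journald"
        runtime = "crun"

        [network]
        network_backend = "netavark"
    """)

    Path("/etc/containers/containers.conf").write_text(CONTAINERS_CONF)
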
Oct 10 09:38:51 compute-0 sudo[51833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sehygxpajybbhochvtqglcayjqbntesm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089130.7883487-383-66345522096509/AnsiballZ_dnf.py'
Oct 10 09:38:51 compute-0 sudo[51833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:51 compute-0 python3.9[51835]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:38:52 compute-0 sudo[51833]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:53 compute-0 sudo[51986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hewqibpdrmhorhyogaegpcxkewowdxvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089133.2263923-416-247031920856881/AnsiballZ_setup.py'
Oct 10 09:38:53 compute-0 sudo[51986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:53 compute-0 python3.9[51988]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:53 compute-0 sudo[51986]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:54 compute-0 sudo[52140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbcxpktlsdjlfnfehycvilfhgqqcdcwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089134.1938312-440-80643584821774/AnsiballZ_stat.py'
Oct 10 09:38:54 compute-0 sudo[52140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:54 compute-0 python3.9[52142]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:38:54 compute-0 sudo[52140]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:55 compute-0 sudo[52292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfszmmwipaihmwxumcidrgmgnotaqkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089135.0674675-467-80470921016063/AnsiballZ_stat.py'
Oct 10 09:38:55 compute-0 sudo[52292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:55 compute-0 python3.9[52294]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:38:55 compute-0 sudo[52292]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:56 compute-0 sudo[52444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqwwsrkwsxyuvwrnjqffwzpfinkxlpjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089136.016929-497-44557464899302/AnsiballZ_service_facts.py'
Oct 10 09:38:56 compute-0 sudo[52444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:56 compute-0 python3.9[52446]: ansible-service_facts Invoked
Oct 10 09:38:56 compute-0 network[52463]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:38:56 compute-0 network[52464]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:38:56 compute-0 network[52465]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:39:00 compute-0 sudo[52444]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:01 compute-0 sudo[52750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnigwrvcqqqunosdfkngcpjlvtbpuxao ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760089141.1879044-536-208219939846148/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760089141.1879044-536-208219939846148/args'
Oct 10 09:39:01 compute-0 sudo[52750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:01 compute-0 sudo[52750]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:02 compute-0 sudo[52917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjnuxyjaaslcyndhjuzrgxvxkagbbzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089142.1276488-569-162752230794975/AnsiballZ_dnf.py'
Oct 10 09:39:02 compute-0 sudo[52917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:02 compute-0 python3.9[52919]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:39:03 compute-0 sudo[52917]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:05 compute-0 sudo[53070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucuivbpeelftgasoknykthfanoatntxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089144.4585106-608-32703296510837/AnsiballZ_package_facts.py'
Oct 10 09:39:05 compute-0 sudo[53070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:05 compute-0 python3.9[53072]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 09:39:05 compute-0 sudo[53070]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:06 compute-0 sudo[53222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqwgjycwlufdufekyxftrjrexizbqssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089146.5802207-638-27064877163655/AnsiballZ_stat.py'
Oct 10 09:39:06 compute-0 sudo[53222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:07 compute-0 python3.9[53224]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:07 compute-0 sudo[53222]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:07 compute-0 sudo[53347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mksjjrrndsuvsnnxecjdnbkbwflrbquv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089146.5802207-638-27064877163655/AnsiballZ_copy.py'
Oct 10 09:39:07 compute-0 sudo[53347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:07 compute-0 python3.9[53349]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089146.5802207-638-27064877163655/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:07 compute-0 sudo[53347]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:08 compute-0 sudo[53501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lugttssruqcgdizfbhchnfljquigutgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089148.191899-683-169717492507852/AnsiballZ_stat.py'
Oct 10 09:39:08 compute-0 sudo[53501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:08 compute-0 python3.9[53503]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:08 compute-0 sudo[53501]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:09 compute-0 sudo[53626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzzxiyliknqsjsawgiqcnrbodhnifmps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089148.191899-683-169717492507852/AnsiballZ_copy.py'
Oct 10 09:39:09 compute-0 sudo[53626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:09 compute-0 python3.9[53628]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089148.191899-683-169717492507852/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:09 compute-0 sudo[53626]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:10 compute-0 sudo[53780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumhxrgjhtprqmzyafmntlrwrrbmmmui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089150.415227-746-109487577530327/AnsiballZ_lineinfile.py'
Oct 10 09:39:10 compute-0 sudo[53780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:11 compute-0 python3.9[53782]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:11 compute-0 sudo[53780]: pam_unix(sudo:session): session closed for user root
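
The lineinfile task guarantees /etc/sysconfig/network carries PEERNTP=no, so NTP servers handed out by DHCP do not get injected alongside the chrony.conf just installed. Its replace-or-append behaviour, sketched with the standard library:

    import re
    from pathlib import Path

    path = Path("/etc/sysconfig/network")
    wanted, pattern = "PEERNTP=no", re.compile(r"^PEERNTP=")

    lines = path.read_text().splitlines() if path.exists() else []
    for i, existing in enumerate(lines):
        if pattern.match(existing):
            lines[i] = wanted            # regexp matched: replace in place
            break
    else:
        lines.append(wanted)             # no match: append (create=True)

    path.write_text("\n".join(lines) + "\n")
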
Oct 10 09:39:12 compute-0 sudo[53934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctduqhetjrpmdkzzpeioaqzjqcidbzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089152.1598248-791-93266380701838/AnsiballZ_setup.py'
Oct 10 09:39:12 compute-0 sudo[53934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:12 compute-0 python3.9[53936]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:39:12 compute-0 sudo[53934]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:13 compute-0 sudo[54018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzmthaknaiuxiljuzhzkqljblanmwur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089152.1598248-791-93266380701838/AnsiballZ_systemd.py'
Oct 10 09:39:13 compute-0 sudo[54018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:14 compute-0 python3.9[54020]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:14 compute-0 sudo[54018]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:16 compute-0 sudo[54172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfiatxhltlibgiuvvwnfubswyyniqjns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089155.9851158-839-37464934047967/AnsiballZ_setup.py'
Oct 10 09:39:16 compute-0 sudo[54172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:16 compute-0 python3.9[54174]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:39:16 compute-0 sudo[54172]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:17 compute-0 sudo[54256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwxiymehuvlzjdagrpyqvojdtnuxxzlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089155.9851158-839-37464934047967/AnsiballZ_systemd.py'
Oct 10 09:39:17 compute-0 sudo[54256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:17 compute-0 python3.9[54258]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:39:17 compute-0 chronyd[803]: chronyd exiting
Oct 10 09:39:17 compute-0 systemd[1]: Stopping NTP client/server...
Oct 10 09:39:17 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 10 09:39:17 compute-0 systemd[1]: Stopped NTP client/server.
Oct 10 09:39:17 compute-0 systemd[1]: Starting NTP client/server...
Oct 10 09:39:17 compute-0 chronyd[54267]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 09:39:17 compute-0 chronyd[54267]: Frequency -32.124 +/- 0.331 ppm read from /var/lib/chrony/drift
Oct 10 09:39:17 compute-0 chronyd[54267]: Loaded seccomp filter (level 2)
Oct 10 09:39:17 compute-0 systemd[1]: Started NTP client/server.
Oct 10 09:39:17 compute-0 sudo[54256]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:18 compute-0 sshd-session[49466]: Connection closed by 192.168.122.30 port 46718
Oct 10 09:39:18 compute-0 sshd-session[49463]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:39:18 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 10 09:39:18 compute-0 systemd[1]: session-12.scope: Consumed 26.997s CPU time.
Oct 10 09:39:18 compute-0 systemd-logind[806]: Session 12 logged out. Waiting for processes to exit.
Oct 10 09:39:18 compute-0 systemd-logind[806]: Removed session 12.
Oct 10 09:39:24 compute-0 sshd-session[54293]: Accepted publickey for zuul from 192.168.122.30 port 50176 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:39:24 compute-0 systemd-logind[806]: New session 13 of user zuul.
Oct 10 09:39:24 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 10 09:39:24 compute-0 sshd-session[54293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:39:24 compute-0 sudo[54446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpifdagudlgjvtcelfnmwckkfhjpoex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089164.232885-26-126173162088667/AnsiballZ_file.py'
Oct 10 09:39:24 compute-0 sudo[54446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:24 compute-0 python3.9[54448]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:24 compute-0 sudo[54446]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:25 compute-0 sudo[54598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwvekoxwgnekroabrxepxtqpyzaaekma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089165.0890784-62-201988076651422/AnsiballZ_stat.py'
Oct 10 09:39:25 compute-0 sudo[54598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:25 compute-0 python3.9[54600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:25 compute-0 sudo[54598]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:26 compute-0 sudo[54721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctvhvfuvjwdwfcxrncujhezwloythidx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089165.0890784-62-201988076651422/AnsiballZ_copy.py'
Oct 10 09:39:26 compute-0 sudo[54721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:26 compute-0 python3.9[54723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089165.0890784-62-201988076651422/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:26 compute-0 sudo[54721]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:26 compute-0 sshd-session[54296]: Connection closed by 192.168.122.30 port 50176
Oct 10 09:39:26 compute-0 sshd-session[54293]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:39:26 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 10 09:39:26 compute-0 systemd[1]: session-13.scope: Consumed 1.647s CPU time.
Oct 10 09:39:26 compute-0 systemd-logind[806]: Session 13 logged out. Waiting for processes to exit.
Oct 10 09:39:26 compute-0 systemd-logind[806]: Removed session 13.
Oct 10 09:39:32 compute-0 sshd-session[54748]: Accepted publickey for zuul from 192.168.122.30 port 50180 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:39:32 compute-0 systemd-logind[806]: New session 14 of user zuul.
Oct 10 09:39:32 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 10 09:39:32 compute-0 sshd-session[54748]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:39:33 compute-0 python3.9[54901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:39:34 compute-0 sudo[55055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yarnnctkdujiprgweoapngkdiyaceaip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089173.8276753-59-150278978069199/AnsiballZ_file.py'
Oct 10 09:39:34 compute-0 sudo[55055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:34 compute-0 python3.9[55057]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:34 compute-0 sudo[55055]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:35 compute-0 sudo[55230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacfxkdlbfklxuqynmaatbhdiqoarjzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089174.746935-83-241330668008588/AnsiballZ_stat.py'
Oct 10 09:39:35 compute-0 sudo[55230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:35 compute-0 python3.9[55232]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:35 compute-0 sudo[55230]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:36 compute-0 sudo[55353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvbmpapsumggnydwwfwzgkyvqsissheb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089174.746935-83-241330668008588/AnsiballZ_copy.py'
Oct 10 09:39:36 compute-0 sudo[55353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:36 compute-0 python3.9[55355]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760089174.746935-83-241330668008588/.source.json _original_basename=._g9obmzc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:36 compute-0 sudo[55353]: pam_unix(sudo:session): session closed for user root
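
The auth.json written here checksums to bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, which appears to be the SHA-1 of a bare "{}": an empty credentials store, suggesting no registry logins were templated in for this job. Easy to verify:

    import hashlib

    # SHA-1 of an empty JSON object, for comparison against the checksum
    # logged for /root/.config/containers/auth.json above.
    print(hashlib.sha1(b"{}").hexdigest())
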
Oct 10 09:39:37 compute-0 sudo[55505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yienitdabdlztxhkvldmcirnyeczomab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089176.8148715-152-84874178365501/AnsiballZ_stat.py'
Oct 10 09:39:37 compute-0 sudo[55505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:37 compute-0 python3.9[55507]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:37 compute-0 sudo[55505]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:37 compute-0 sudo[55628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwzalrbcgpevrfxqxarwvmmmihkmthiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089176.8148715-152-84874178365501/AnsiballZ_copy.py'
Oct 10 09:39:37 compute-0 sudo[55628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:37 compute-0 python3.9[55630]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089176.8148715-152-84874178365501/.source _original_basename=.d1162ghf follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:37 compute-0 sudo[55628]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:38 compute-0 sudo[55780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlgmjgcekuaedafvvtwkihaqagqvwsam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089178.3491557-200-28975300987753/AnsiballZ_file.py'
Oct 10 09:39:38 compute-0 sudo[55780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:38 compute-0 python3.9[55782]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:39:38 compute-0 sudo[55780]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:39 compute-0 sudo[55932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmhqrssknvrjhgbdbmqkyfmpcootirwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089179.1991973-224-34278831282791/AnsiballZ_stat.py'
Oct 10 09:39:39 compute-0 sudo[55932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:39 compute-0 python3.9[55934]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:39 compute-0 sudo[55932]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:40 compute-0 sudo[56055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkrihglnvngtlxherykogjlzrspeammm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089179.1991973-224-34278831282791/AnsiballZ_copy.py'
Oct 10 09:39:40 compute-0 sudo[56055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:40 compute-0 python3.9[56057]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089179.1991973-224-34278831282791/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:39:40 compute-0 sudo[56055]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:40 compute-0 sudo[56207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhqkkpybrqfhgygrhezeempjiutonff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089180.4514594-224-88581564229771/AnsiballZ_stat.py'
Oct 10 09:39:40 compute-0 sudo[56207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:40 compute-0 python3.9[56209]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:40 compute-0 sudo[56207]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:41 compute-0 sudo[56330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psnstmlzytfkgvnnvegwurrtehgbggjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089180.4514594-224-88581564229771/AnsiballZ_copy.py'
Oct 10 09:39:41 compute-0 sudo[56330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:41 compute-0 python3.9[56332]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089180.4514594-224-88581564229771/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:39:41 compute-0 sudo[56330]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:42 compute-0 sudo[56482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oazmyxjlhracxtfkypvzzccymunnozrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089182.1388054-311-14160782909157/AnsiballZ_file.py'
Oct 10 09:39:42 compute-0 sudo[56482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:42 compute-0 python3.9[56484]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:42 compute-0 sudo[56482]: pam_unix(sudo:session): session closed for user root
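
One quirk worth flagging: this file task logs mode=420 while its neighbours log mode=0644. Both are the same permission bits; an unquoted mode: 0644 in YAML is parsed as an octal literal and reaches the module as the decimal integer 420. Quick check:

    # 420 decimal == 0o644 octal: an unquoted "mode: 0644" in YAML arrives
    # at the module as the integer 420, i.e. rw-r--r--.
    assert 420 == 0o644
    print(oct(420))   # 0o644
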
Oct 10 09:39:43 compute-0 sudo[56634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcbnurtvucnrfanzpszxugfwavppente ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089182.9153862-335-121011953396414/AnsiballZ_stat.py'
Oct 10 09:39:43 compute-0 sudo[56634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:43 compute-0 python3.9[56636]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:43 compute-0 sudo[56634]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:43 compute-0 sudo[56757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtuhuleotzranyvpyjnpshisqymxhkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089182.9153862-335-121011953396414/AnsiballZ_copy.py'
Oct 10 09:39:43 compute-0 sudo[56757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:43 compute-0 python3.9[56759]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089182.9153862-335-121011953396414/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:43 compute-0 sudo[56757]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:44 compute-0 sudo[56909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maqdtttqdnzzotsretkbmfrneebnxdxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089184.3111806-380-164137170714407/AnsiballZ_stat.py'
Oct 10 09:39:44 compute-0 sudo[56909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:44 compute-0 python3.9[56911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:44 compute-0 sudo[56909]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:45 compute-0 sudo[57032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbjtgexdcvlajrwwibnivchgifctsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089184.3111806-380-164137170714407/AnsiballZ_copy.py'
Oct 10 09:39:45 compute-0 sudo[57032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:45 compute-0 python3.9[57034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089184.3111806-380-164137170714407/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:45 compute-0 sudo[57032]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:46 compute-0 sudo[57184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abixwezcyqahiopjowdpkxrvwadtjtow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089185.688483-425-250254289213017/AnsiballZ_systemd.py'
Oct 10 09:39:46 compute-0 sudo[57184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:46 compute-0 python3.9[57186]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:46 compute-0 systemd[1]: Reloading.
Oct 10 09:39:46 compute-0 systemd-sysv-generator[57214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:46 compute-0 systemd-rc-local-generator[57207]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:46 compute-0 systemd[1]: Reloading.
Oct 10 09:39:46 compute-0 systemd-rc-local-generator[57255]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:46 compute-0 systemd-sysv-generator[57258]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:47 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 10 09:39:47 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 10 09:39:47 compute-0 sudo[57184]: pam_unix(sudo:session): session closed for user root
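
The two "Reloading." passes followed by "Starting EDPM Container Shutdown..." match the task arguments daemon_reload=True, enabled=True, state=started: systemd reloads once for the explicit daemon_reload so it sees the freshly copied unit and preset, again when the enable step changes the enablement symlinks, and then starts the oneshot unit. The plain systemctl equivalent:

    import subprocess

    # daemon_reload=True, enabled=True, state=started, as in the task above.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm-container-shutdown.service"],
                ["systemctl", "start", "edpm-container-shutdown.service"]):
        subprocess.run(cmd, check=True)
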
Oct 10 09:39:47 compute-0 sudo[57413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnfnpoxvgilqtawwivdjjazwmzdnzxdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089187.4014146-449-253492311141541/AnsiballZ_stat.py'
Oct 10 09:39:47 compute-0 sudo[57413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:47 compute-0 python3.9[57415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:47 compute-0 sudo[57413]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:48 compute-0 sudo[57536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltatsexaszfkgwurgtxurtdjumhgubsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089187.4014146-449-253492311141541/AnsiballZ_copy.py'
Oct 10 09:39:48 compute-0 sudo[57536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:48 compute-0 python3.9[57538]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089187.4014146-449-253492311141541/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:48 compute-0 sudo[57536]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:49 compute-0 sudo[57688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzskijlfagacwydoqgkgxzvkguejjhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089188.7964988-494-31638217668711/AnsiballZ_stat.py'
Oct 10 09:39:49 compute-0 sudo[57688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:49 compute-0 python3.9[57690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:49 compute-0 sudo[57688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:49 compute-0 sudo[57811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltvhghwvytfiqgthgirvkgwtpvmetnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089188.7964988-494-31638217668711/AnsiballZ_copy.py'
Oct 10 09:39:49 compute-0 sudo[57811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:49 compute-0 python3.9[57813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089188.7964988-494-31638217668711/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:50 compute-0 sudo[57811]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:50 compute-0 sudo[57963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwhjkvxvgcnpeztrtmqaqkdflaitwvdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089190.2215056-539-92322466303255/AnsiballZ_systemd.py'
Oct 10 09:39:50 compute-0 sudo[57963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:51 compute-0 python3.9[57965]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:51 compute-0 systemd[1]: Reloading.
Oct 10 09:39:51 compute-0 systemd-sysv-generator[57996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:51 compute-0 systemd-rc-local-generator[57993]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:51 compute-0 systemd[1]: Reloading.
Oct 10 09:39:51 compute-0 systemd-sysv-generator[58033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:51 compute-0 systemd-rc-local-generator[58028]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:51 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 09:39:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:39:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:39:51 compute-0 systemd[1]: Finished Create netns directory.
Oct 10 09:39:51 compute-0 sudo[57963]: pam_unix(sudo:session): session closed for user root
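
netns-placeholder follows the same copy/preset/enable pattern. The transient run-netns-placeholder.mount and the "Create netns directory" description suggest a oneshot unit that creates and immediately removes a throwaway network namespace so /run/netns exists as a mount point. A hedged sketch of such a unit — the body is an assumption; only the unit name and description appear in the log:

    cat <<'EOF' > /etc/systemd/system/netns-placeholder.service
    [Unit]
    Description=Create netns directory

    [Service]
    Type=oneshot
    # "ip netns add" bind-mounts the namespace under /run/netns; deleting it
    # unmounts again, which matches run-netns-placeholder.mount deactivating.
    ExecStart=/usr/sbin/ip netns add placeholder
    ExecStart=/usr/sbin/ip netns delete placeholder

    [Install]
    WantedBy=multi-user.target
    EOF
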
Oct 10 09:39:53 compute-0 python3.9[58192]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:39:53 compute-0 network[58209]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:39:53 compute-0 network[58210]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:39:53 compute-0 network[58211]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:39:58 compute-0 sudo[58473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccnwxrppxfvopxyfvtgnotvfpeunumr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089198.667991-587-210455169708569/AnsiballZ_systemd.py'
Oct 10 09:39:58 compute-0 sudo[58473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:59 compute-0 python3.9[58475]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:59 compute-0 systemd[1]: Reloading.
Oct 10 09:39:59 compute-0 systemd-rc-local-generator[58506]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:59 compute-0 systemd-sysv-generator[58509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:59 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 10 09:39:59 compute-0 iptables.init[58516]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 10 09:39:59 compute-0 iptables.init[58516]: iptables: Flushing firewall rules: [  OK  ]
Oct 10 09:39:59 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 10 09:39:59 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 10 09:40:00 compute-0 sudo[58473]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:00 compute-0 sudo[58710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybnqgxmxvlbfdeojsatsxnocpqtvgmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089200.180485-587-269244018974833/AnsiballZ_systemd.py'
Oct 10 09:40:00 compute-0 sudo[58710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:00 compute-0 python3.9[58712]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:40:00 compute-0 sudo[58710]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:01 compute-0 sudo[58864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avhavjtqvjfwkwmtgjjraeqckmshjipm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089201.3564398-635-164024672050079/AnsiballZ_systemd.py'
Oct 10 09:40:01 compute-0 sudo[58864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:01 compute-0 python3.9[58866]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:40:02 compute-0 systemd[1]: Reloading.
Oct 10 09:40:02 compute-0 systemd-rc-local-generator[58889]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:40:02 compute-0 systemd-sysv-generator[58894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:40:02 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 10 09:40:02 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 10 09:40:02 compute-0 sudo[58864]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:03 compute-0 sudo[59055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzzyvsziixohnmocwduaxcrxlqntlds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089202.802839-659-106547759692396/AnsiballZ_command.py'
Oct 10 09:40:03 compute-0 sudo[59055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:03 compute-0 python3.9[59057]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:40:03 compute-0 sudo[59055]: pam_unix(sudo:session): session closed for user root
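
Between 09:39:58 and 09:40:03 the playbook migrates the firewall backend: iptables.service and ip6tables.service are stopped and disabled, nftables is enabled and started, and the kernel ruleset is flushed before EDPM's own rules are written further below. The shell equivalent of what is logged:

    systemctl disable --now iptables.service ip6tables.service
    systemctl enable --now nftables.service
    nft flush ruleset    # clean slate before the EDPM rule files are loaded
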
Oct 10 09:40:04 compute-0 sudo[59208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-basqcvsoyobqfhxpbkxmxwojefeanncc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089204.0424807-701-256487339685397/AnsiballZ_stat.py'
Oct 10 09:40:04 compute-0 sudo[59208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:04 compute-0 python3.9[59210]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:04 compute-0 sudo[59208]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:04 compute-0 sudo[59333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtvthhzjtotclsikpyqnigyeopgrtzpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089204.0424807-701-256487339685397/AnsiballZ_copy.py'
Oct 10 09:40:04 compute-0 sudo[59333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:05 compute-0 python3.9[59335]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089204.0424807-701-256487339685397/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:05 compute-0 sudo[59333]: pam_unix(sudo:session): session closed for user root
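
The sshd_config copy at 09:40:05 carries validate=/usr/sbin/sshd -T -f %s, so the rendered template is test-parsed before it replaces the live file. A minimal sketch of that validate semantics (the temp path and rendered_sshd_config name are illustrative, not from the log):

    tmp=$(mktemp)
    cp rendered_sshd_config "$tmp"    # rendered_sshd_config is hypothetical
    /usr/sbin/sshd -T -f "$tmp" \
      && install -m 0600 "$tmp" /etc/ssh/sshd_config   # installed only if the test parse succeeds
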
Oct 10 09:40:06 compute-0 python3.9[59486]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:40:06 compute-0 polkitd[6931]: Registered Authentication Agent for unix-process:59488:210978 (system bus name :1.523 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 09:40:31 compute-0 polkitd[6931]: Unregistered Authentication Agent for unix-process:59488:210978 (system bus name :1.523, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 09:40:31 compute-0 polkit-agent-helper-1[59500]: pam_unix(polkit-1:auth): conversation failed
Oct 10 09:40:31 compute-0 polkit-agent-helper-1[59500]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 10 09:40:31 compute-0 polkitd[6931]: Operator of unix-process:59488:210978 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.522 [<unknown>] (owned by unix-user:zuul)
Oct 10 09:40:31 compute-0 sshd-session[54751]: Connection closed by 192.168.122.30 port 50180
Oct 10 09:40:31 compute-0 sshd-session[54748]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:40:31 compute-0 systemd-logind[806]: Session 14 logged out. Waiting for processes to exit.
Oct 10 09:40:31 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 10 09:40:31 compute-0 systemd[1]: session-14.scope: Consumed 20.446s CPU time.
Oct 10 09:40:31 compute-0 systemd-logind[806]: Removed session 14.
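
The reload that follows fails in an instructive way: the ansible.builtin.systemd call at 09:40:06 runs as zuul without privilege escalation (no sudo line precedes it), so systemd defers to polkit for org.freedesktop.systemd1.manage-units, pkttyagent cannot supply a root password, and the attempt fails after roughly 25 seconds, after which the SSH session is torn down and re-established. Reproduced as shell, assuming the module maps to a plain systemctl call:

    sudo -u zuul systemctl reload sshd   # unprivileged: polkit prompt, then auth failure as above
    sudo systemctl reload sshd           # privileged: succeeds without consulting polkit
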
Oct 10 09:40:43 compute-0 sshd-session[59526]: Accepted publickey for zuul from 192.168.122.30 port 51934 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:40:44 compute-0 systemd-logind[806]: New session 15 of user zuul.
Oct 10 09:40:44 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 10 09:40:44 compute-0 sshd-session[59526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:40:45 compute-0 python3.9[59679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:40:45 compute-0 sudo[59833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guphygxruydndggmftcevmhyyosfxzpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089245.5611215-59-273507512716120/AnsiballZ_file.py'
Oct 10 09:40:45 compute-0 sudo[59833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:46 compute-0 python3.9[59835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:46 compute-0 sudo[59833]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:46 compute-0 sudo[60008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjiyigwbqsprtajxrcteqyrnyoicetnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089246.4641283-83-203844885955505/AnsiballZ_stat.py'
Oct 10 09:40:46 compute-0 sudo[60008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:47 compute-0 python3.9[60010]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:47 compute-0 sudo[60008]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:47 compute-0 sudo[60086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhxlihqfgkaceegqqbpyfephkcavrsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089246.4641283-83-203844885955505/AnsiballZ_file.py'
Oct 10 09:40:47 compute-0 sudo[60086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:47 compute-0 python3.9[60088]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=._v7813yj recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:47 compute-0 sudo[60086]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:48 compute-0 sudo[60238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxainysfoarunvjlgckholgeseveujbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089248.2728028-143-30445205785842/AnsiballZ_stat.py'
Oct 10 09:40:48 compute-0 sudo[60238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:48 compute-0 python3.9[60240]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:48 compute-0 sudo[60238]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:48 compute-0 sudo[60316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsdpxwuzvidzkmbadjfqwegicokreqlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089248.2728028-143-30445205785842/AnsiballZ_file.py'
Oct 10 09:40:48 compute-0 sudo[60316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:49 compute-0 python3.9[60318]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.7b4bwzi0 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:49 compute-0 sudo[60316]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:49 compute-0 sudo[60468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxlivuhtygmeocwkunxarunlgzafgnke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089249.6703393-182-740171359182/AnsiballZ_file.py'
Oct 10 09:40:49 compute-0 sudo[60468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:50 compute-0 python3.9[60470]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:40:50 compute-0 sudo[60468]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:50 compute-0 sudo[60620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfagffwkdccemevfwyiezptfspblweou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089250.4876575-206-175322428897219/AnsiballZ_stat.py'
Oct 10 09:40:50 compute-0 sudo[60620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:50 compute-0 python3.9[60622]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:51 compute-0 sudo[60620]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:51 compute-0 sudo[60698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqidvzqqiwozgxbynayceuwgdiheyvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089250.4876575-206-175322428897219/AnsiballZ_file.py'
Oct 10 09:40:51 compute-0 sudo[60698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:51 compute-0 python3.9[60700]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:40:51 compute-0 sudo[60698]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:51 compute-0 sudo[60850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abtdqgomogtbjbkikyzojjdzlslirgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089251.5841124-206-141083058688080/AnsiballZ_stat.py'
Oct 10 09:40:51 compute-0 sudo[60850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:52 compute-0 python3.9[60852]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:52 compute-0 sudo[60850]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:52 compute-0 sudo[60928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hximeenrllnfbkyazatnopzrngzjvggq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089251.5841124-206-141083058688080/AnsiballZ_file.py'
Oct 10 09:40:52 compute-0 sudo[60928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:52 compute-0 python3.9[60930]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:40:52 compute-0 sudo[60928]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:53 compute-0 sudo[61080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhidxgddttcodxiavbidkwipcnroxeuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089253.058455-275-221821263642971/AnsiballZ_file.py'
Oct 10 09:40:53 compute-0 sudo[61080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:53 compute-0 python3.9[61082]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:53 compute-0 sudo[61080]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:54 compute-0 sudo[61232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doptmbywqdwcfxsegkiihoxageulsfuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089253.8573806-299-73035044964112/AnsiballZ_stat.py'
Oct 10 09:40:54 compute-0 sudo[61232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:54 compute-0 python3.9[61234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:54 compute-0 sudo[61232]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:54 compute-0 sudo[61310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prfbmwwltbgrtwjniorpsnebyzqgjtef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089253.8573806-299-73035044964112/AnsiballZ_file.py'
Oct 10 09:40:54 compute-0 sudo[61310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:54 compute-0 python3.9[61312]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:54 compute-0 sudo[61310]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:55 compute-0 sudo[61462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbhuudqqwxoikuhajqlivjxwiuxtwwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089255.1163979-335-235141923255908/AnsiballZ_stat.py'
Oct 10 09:40:55 compute-0 sudo[61462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:55 compute-0 python3.9[61464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:55 compute-0 sudo[61462]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:55 compute-0 sudo[61540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbuubtvnanqfubqocaaegtnngpigsjvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089255.1163979-335-235141923255908/AnsiballZ_file.py'
Oct 10 09:40:55 compute-0 sudo[61540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:56 compute-0 python3.9[61542]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:56 compute-0 sudo[61540]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:57 compute-0 sudo[61692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdxjenokublalkmizswcfpaebigdgagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089256.4129136-371-119016943598705/AnsiballZ_systemd.py'
Oct 10 09:40:57 compute-0 sudo[61692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:57 compute-0 python3.9[61694]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:40:57 compute-0 systemd[1]: Reloading.
Oct 10 09:40:57 compute-0 systemd-rc-local-generator[61724]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:40:57 compute-0 systemd-sysv-generator[61727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:40:57 compute-0 sudo[61692]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:58 compute-0 sudo[61883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymwvbchvjehioercrrtrwizvnhteieou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089257.97869-395-112230132227805/AnsiballZ_stat.py'
Oct 10 09:40:58 compute-0 sudo[61883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:58 compute-0 python3.9[61885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:58 compute-0 sudo[61883]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:58 compute-0 sudo[61961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxferyryvneobqmexiinjyozkszkzure ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089257.97869-395-112230132227805/AnsiballZ_file.py'
Oct 10 09:40:58 compute-0 sudo[61961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:59 compute-0 python3.9[61963]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:59 compute-0 sudo[61961]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:59 compute-0 sudo[62113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voclelppiorveyptatmesctysbheiuur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089259.3864574-431-254120893699513/AnsiballZ_stat.py'
Oct 10 09:40:59 compute-0 sudo[62113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:59 compute-0 python3.9[62115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:59 compute-0 sudo[62113]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:00 compute-0 sudo[62191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cltjgmohdwkcnkaczbpdfjyhdwapgbyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089259.3864574-431-254120893699513/AnsiballZ_file.py'
Oct 10 09:41:00 compute-0 sudo[62191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:00 compute-0 python3.9[62193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:00 compute-0 sudo[62191]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:01 compute-0 sudo[62343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkquqaxcayyzsyrfgzgowxokwjfjvrzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089260.7878337-467-44761902109859/AnsiballZ_systemd.py'
Oct 10 09:41:01 compute-0 sudo[62343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:01 compute-0 python3.9[62345]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:41:01 compute-0 systemd[1]: Reloading.
Oct 10 09:41:01 compute-0 systemd-rc-local-generator[62372]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:41:01 compute-0 systemd-sysv-generator[62376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:41:01 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 09:41:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:41:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:41:01 compute-0 systemd[1]: Finished Create netns directory.
Oct 10 09:41:01 compute-0 sudo[62343]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:02 compute-0 python3.9[62538]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:41:02 compute-0 network[62555]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:41:02 compute-0 network[62556]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:41:02 compute-0 network[62557]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:41:07 compute-0 sudo[62818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcobdznimqiokhkjtxfqnnsagdkszhby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089267.2069376-545-37247251380508/AnsiballZ_stat.py'
Oct 10 09:41:07 compute-0 sudo[62818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:07 compute-0 python3.9[62820]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:07 compute-0 sudo[62818]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:08 compute-0 sudo[62896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzxqzahjulygkrtvhridiydndbzbzcqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089267.2069376-545-37247251380508/AnsiballZ_file.py'
Oct 10 09:41:08 compute-0 sudo[62896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:08 compute-0 python3.9[62898]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:08 compute-0 sudo[62896]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:08 compute-0 sudo[63048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyqqrxfwwtkcxtxpeutddqzrykxiltro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089268.692762-584-76477644034088/AnsiballZ_file.py'
Oct 10 09:41:08 compute-0 sudo[63048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:09 compute-0 python3.9[63050]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:09 compute-0 sudo[63048]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:09 compute-0 sudo[63200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzshuzygoxiwpopqwgncnmdremapomj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089269.477482-608-267474013336497/AnsiballZ_stat.py'
Oct 10 09:41:09 compute-0 sudo[63200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:09 compute-0 python3.9[63202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:09 compute-0 sudo[63200]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:10 compute-0 sudo[63323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubtngsadxbvmlibgzutmmwvadkyjvuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089269.477482-608-267474013336497/AnsiballZ_copy.py'
Oct 10 09:41:10 compute-0 sudo[63323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:10 compute-0 python3.9[63325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089269.477482-608-267474013336497/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:10 compute-0 sudo[63323]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:11 compute-0 sudo[63475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmtfttdtzantdazfnxfaekbozymjyzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089271.1140652-662-195466785401456/AnsiballZ_timezone.py'
Oct 10 09:41:11 compute-0 sudo[63475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:11 compute-0 python3.9[63477]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 09:41:11 compute-0 systemd[1]: Starting Time & Date Service...
Oct 10 09:41:11 compute-0 systemd[1]: Started Time & Date Service.
Oct 10 09:41:11 compute-0 sudo[63475]: pam_unix(sudo:session): session closed for user root
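
community.general.timezone applies the zone through systemd-timedated (hence "Starting Time & Date Service" above), effectively equivalent to:

    timedatectl set-timezone UTC   # talks to systemd-timedated over D-Bus
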
Oct 10 09:41:12 compute-0 sudo[63631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdtqsomdnzmdmpvqviuzylwluxuiwmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089272.5924306-689-185952113362960/AnsiballZ_file.py'
Oct 10 09:41:12 compute-0 sudo[63631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:13 compute-0 python3.9[63633]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:13 compute-0 sudo[63631]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:13 compute-0 sudo[63783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoyztcpalrqurnpdqkokzkpnuchkrebw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089273.3436253-713-77739569557506/AnsiballZ_stat.py'
Oct 10 09:41:13 compute-0 sudo[63783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:13 compute-0 python3.9[63785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:13 compute-0 sudo[63783]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:14 compute-0 sudo[63906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxlcrphhdxzdzlncmcavccxmlrzojto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089273.3436253-713-77739569557506/AnsiballZ_copy.py'
Oct 10 09:41:14 compute-0 sudo[63906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:14 compute-0 python3.9[63908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089273.3436253-713-77739569557506/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:14 compute-0 sudo[63906]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:15 compute-0 sudo[64058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuaaxnswxmziaqdawlsrhrpxvwjpebdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089274.895794-758-218813698397318/AnsiballZ_stat.py'
Oct 10 09:41:15 compute-0 sudo[64058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:15 compute-0 python3.9[64060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:15 compute-0 sudo[64058]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:15 compute-0 sudo[64181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymcmlcztpbarjkurmmgmqnphgemxepxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089274.895794-758-218813698397318/AnsiballZ_copy.py'
Oct 10 09:41:15 compute-0 sudo[64181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:16 compute-0 python3.9[64183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089274.895794-758-218813698397318/.source.yaml _original_basename=.cd8xvwwy follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:16 compute-0 sudo[64181]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:16 compute-0 sudo[64333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofeevkfauhzpytwdqbhykntrajqiphcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089276.2935424-803-257794442691262/AnsiballZ_stat.py'
Oct 10 09:41:16 compute-0 sudo[64333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:16 compute-0 python3.9[64335]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:16 compute-0 sudo[64333]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:17 compute-0 sudo[64456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzopfbmbydlmggynbzewjyddsdhcjaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089276.2935424-803-257794442691262/AnsiballZ_copy.py'
Oct 10 09:41:17 compute-0 sudo[64456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:17 compute-0 python3.9[64458]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089276.2935424-803-257794442691262/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:17 compute-0 sudo[64456]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:18 compute-0 sudo[64608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufluqmqssggbrulzmdspymipotbafhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089277.8068194-848-135634826030306/AnsiballZ_command.py'
Oct 10 09:41:18 compute-0 sudo[64608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:18 compute-0 python3.9[64610]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:18 compute-0 sudo[64608]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:19 compute-0 sudo[64761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwuihbkwdxbybqbfdqbvitkslhrkvsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089278.789633-872-118791219016401/AnsiballZ_command.py'
Oct 10 09:41:19 compute-0 sudo[64761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:19 compute-0 python3.9[64763]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:19 compute-0 sudo[64761]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:20 compute-0 sudo[64914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuhutaburvunjswozvxipvgkcmrqxeop ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760089279.7078044-896-25020053902533/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 09:41:20 compute-0 sudo[64914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:20 compute-0 python3[64916]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 09:41:20 compute-0 sudo[64914]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:21 compute-0 sudo[65066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwtipfgvvqximhxzkzvquecgacwfsvae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089280.800193-920-49191907538027/AnsiballZ_stat.py'
Oct 10 09:41:21 compute-0 sudo[65066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:21 compute-0 python3.9[65068]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:21 compute-0 sudo[65066]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:21 compute-0 sudo[65189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dipekpjdpccokrdtydyimwswxnlxjthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089280.800193-920-49191907538027/AnsiballZ_copy.py'
Oct 10 09:41:21 compute-0 sudo[65189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:21 compute-0 python3.9[65191]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089280.800193-920-49191907538027/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:21 compute-0 sudo[65189]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:22 compute-0 sudo[65341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwichqbddwchfautlkqtaxwpinqtqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089282.3948953-965-129559390414652/AnsiballZ_stat.py'
Oct 10 09:41:22 compute-0 sudo[65341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:22 compute-0 python3.9[65343]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:22 compute-0 sudo[65341]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:23 compute-0 sudo[65464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngkvzwjxehutmkvwspscdhfqbpcbrkjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089282.3948953-965-129559390414652/AnsiballZ_copy.py'
Oct 10 09:41:23 compute-0 sudo[65464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:23 compute-0 python3.9[65466]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089282.3948953-965-129559390414652/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:23 compute-0 sudo[65464]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:24 compute-0 sudo[65616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvskqdnhkgxrsbkpvnejzutidydpplhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089283.919038-1010-84820094216277/AnsiballZ_stat.py'
Oct 10 09:41:24 compute-0 sudo[65616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:24 compute-0 python3.9[65618]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:24 compute-0 sudo[65616]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:24 compute-0 sudo[65739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmeeqwuvwzufymwfpqzzylqvulvyxaju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089283.919038-1010-84820094216277/AnsiballZ_copy.py'
Oct 10 09:41:24 compute-0 sudo[65739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:25 compute-0 python3.9[65741]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089283.919038-1010-84820094216277/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:25 compute-0 sudo[65739]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:25 compute-0 sudo[65891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmrrvkpvvfyqedjvalyhotkoahpbvcdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089285.5470479-1055-61617774213311/AnsiballZ_stat.py'
Oct 10 09:41:25 compute-0 sudo[65891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:26 compute-0 python3.9[65893]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:26 compute-0 sudo[65891]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:26 compute-0 chronyd[54267]: Selected source 142.4.192.253 (pool.ntp.org)
Oct 10 09:41:26 compute-0 sudo[66014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xregwuhqfgoegclfalhdovrbwodlcnyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089285.5470479-1055-61617774213311/AnsiballZ_copy.py'
Oct 10 09:41:26 compute-0 sudo[66014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:26 compute-0 python3.9[66016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089285.5470479-1055-61617774213311/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:26 compute-0 sudo[66014]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:27 compute-0 sudo[66166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiaigukplbisrsiymmhhangfawdwihpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089287.0381684-1100-147859861945733/AnsiballZ_stat.py'
Oct 10 09:41:27 compute-0 sudo[66166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:27 compute-0 python3.9[66168]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:27 compute-0 sudo[66166]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:27 compute-0 sudo[66289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbbqgujasodkhloonvufaqfxlquaudkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089287.0381684-1100-147859861945733/AnsiballZ_copy.py'
Oct 10 09:41:27 compute-0 sudo[66289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:28 compute-0 python3.9[66291]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089287.0381684-1100-147859861945733/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:28 compute-0 sudo[66289]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:28 compute-0 sudo[66441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbcdlpvdnzpbckvuwkpiougzfpmppont ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089288.6740782-1145-9546298267102/AnsiballZ_file.py'
Oct 10 09:41:28 compute-0 sudo[66441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:29 compute-0 python3.9[66443]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:29 compute-0 sudo[66441]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:29 compute-0 sudo[66593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcopxxrvwapahfwozgxhbinzdcfmqkew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089289.5136065-1169-262295175427467/AnsiballZ_command.py'
Oct 10 09:41:29 compute-0 sudo[66593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:30 compute-0 python3.9[66595]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:30 compute-0 sudo[66593]: pam_unix(sudo:session): session closed for user root
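[note] The task above dry-runs the assembled EDPM ruleset before anything touches the live tables. Reconstructed from the logged command, the equivalent shell is (nft -c, i.e. --check, only parses the input and exits non-zero on errors; nothing is installed):

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -

The fragments are concatenated in dependency order: chains must be defined before the rules and jumps that reference them.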
Oct 10 09:41:31 compute-0 sudo[66752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ateejagsywqiozctknzywyunatwyhbik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089290.4937906-1193-45828120596212/AnsiballZ_blockinfile.py'
Oct 10 09:41:31 compute-0 sudo[66752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:31 compute-0 python3.9[66754]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:31 compute-0 sudo[66752]: pam_unix(sudo:session): session closed for user root
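[note] Given the logged blockinfile parameters (marker=# {mark} ANSIBLE MANAGED BLOCK, marker_begin=BEGIN, marker_end=END, validate=nft -c -f %s), the managed block in /etc/sysconfig/nftables.conf should come out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

blockinfile runs the validate command against a temporary copy before moving it into place, so a broken include never reaches the file that nftables.service reads at boot.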
Oct 10 09:41:31 compute-0 sudo[66905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfwprbokxpvcgivyfpbkjfynlpdxwrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089291.6169767-1220-113773440564851/AnsiballZ_file.py'
Oct 10 09:41:31 compute-0 sudo[66905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:32 compute-0 python3.9[66907]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:32 compute-0 sudo[66905]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:32 compute-0 sudo[67057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfgywtydkrafoaieminffrdzglstpvva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089292.29634-1220-72418272149338/AnsiballZ_file.py'
Oct 10 09:41:32 compute-0 sudo[67057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:32 compute-0 python3.9[67059]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:32 compute-0 sudo[67057]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:33 compute-0 sudo[67209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fekzdwylegbncyibidfbbfeaadeqbjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089293.2444997-1265-73254072816997/AnsiballZ_mount.py'
Oct 10 09:41:33 compute-0 sudo[67209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:33 compute-0 python3.9[67211]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:41:34 compute-0 sudo[67209]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:34 compute-0 sudo[67362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnpmunywczjcvutgodxjrrvjhvjyjvik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089294.1273825-1265-207965732543461/AnsiballZ_mount.py'
Oct 10 09:41:34 compute-0 sudo[67362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:34 compute-0 python3.9[67364]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:41:34 compute-0 sudo[67362]: pam_unix(sudo:session): session closed for user root
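[note] The two mount tasks are roughly equivalent to the following shell (ansible.posix.mount with state=mounted both mounts the filesystem and, with boot=True, persists an fstab entry; the exact fstab lines below are the module's standard rendering, not copied from the host):

    mkdir -p /dev/hugepages1G /dev/hugepages2M    # created by the file tasks above
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # expected /etc/fstab entries (dump=0, passno=0):
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0

Separate mount points per page size let consumers request 1G or 2M backing explicitly.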
Oct 10 09:41:35 compute-0 sshd-session[59529]: Connection closed by 192.168.122.30 port 51934
Oct 10 09:41:35 compute-0 sshd-session[59526]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:41:35 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 10 09:41:35 compute-0 systemd[1]: session-15.scope: Consumed 33.930s CPU time.
Oct 10 09:41:35 compute-0 systemd-logind[806]: Session 15 logged out. Waiting for processes to exit.
Oct 10 09:41:35 compute-0 systemd-logind[806]: Removed session 15.
Oct 10 09:41:41 compute-0 sshd-session[67390]: Accepted publickey for zuul from 192.168.122.30 port 37356 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:41:41 compute-0 systemd-logind[806]: New session 16 of user zuul.
Oct 10 09:41:41 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 10 09:41:41 compute-0 sshd-session[67390]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:41:41 compute-0 sudo[67543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzagmisldezgvagtwdsnuuwpewycifyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089301.3681104-18-108116493586475/AnsiballZ_tempfile.py'
Oct 10 09:41:41 compute-0 sudo[67543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:41 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 09:41:42 compute-0 python3.9[67545]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 09:41:42 compute-0 sudo[67543]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:42 compute-0 sudo[67698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-perlnqifwoabpskfhatxwsltgvcywgvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089302.4054444-54-194622407708507/AnsiballZ_stat.py'
Oct 10 09:41:42 compute-0 sudo[67698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:43 compute-0 python3.9[67700]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:41:43 compute-0 sudo[67698]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:44 compute-0 sudo[67850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqqutrjwqocegfrlyjahkcxjvmwgvymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089303.3835566-84-54906469179238/AnsiballZ_setup.py'
Oct 10 09:41:44 compute-0 sudo[67850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:44 compute-0 python3.9[67852]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:41:44 compute-0 sudo[67850]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:45 compute-0 sudo[68002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chmnvipwhvhhktchwxkrjesfrlnfeqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089304.6389358-109-141966905517591/AnsiballZ_blockinfile.py'
Oct 10 09:41:45 compute-0 sudo[68002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:45 compute-0 python3.9[68004]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs576V3VvbSgv48Ml4JM3ripPY5VUVh8vdkDr1njjfd7J/WrQQkTf/D0b7+eGTXj3Y1fx1/haVrDafo7g0NqcSZX+zNUgTCnYPWafo7RMG4Q7ITVk1NPIkAC1cDUxHNeWhXaOkxCz96sTkO4aNW3uoFjsp2JkJtRJmHzT7q/bc0N9x7YcWh9vwRRBiOKlV8cWMHuHUzOlloEQLN67Dht1xHWr1eO/SITqUlWY13tc/54xQuo8nBQNNX9ArhMbJz2a9AoNVUAAYFF8hWFI5ES/GL9qsCp8dnmAtrY4Rc07QmHo1RkcjXe1f6D+vymRIP3YOqIjlWp0blCTfcCGno5lBa9f5JachIsogk+5+GYx4AAbWLyxxecfKzdCxrGnQlfFgldc1xDN1RG+8HwFEAuHQDWTCDUgF67FXSHy7aVxrdzU4046193/o3VKTpSaJmFldASxFgyUeujs56OgC0qYM0zKV4jOsMBcocVHvH/1FOPWIr81XXYvu6C/Ntd6sBj0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSf7pFS/S1SmUMk/yMobwR+LTaQZlAhBqo7Ido5r8dg
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1l0EOuMseZ7ulHkfzzVtKv+5A9EWRy+oXVB+t370vohhJoN3+lviS8xoR8GttJUcHVCaeioniRtOWysbNdC0I=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDarlOcgDXqRdSww3oIuqu7nGIBJToNGSnU1ljOr6GTlHTxxOoTztIrvZrPaJA8w/ixztkhFZZSdRPw4meYayY05CNu9SneiL62twzDLDsqeDPAspkh69Ljj5aGCLf6GJDiK0m2h1jLDIFtXH3lIQE9781zA7ZQ8+/xeF4yRS1/Fb5CXDG+oi/J0veCffs6t0TYmrUfSgS2H2y0UxNu7C6GoQKRde1arPLOYexvlg2RjlWM6Ex4JCqTAd9EN330Kh4HUr3r46ET8mwi1mPndibbiW0heXgrg8FeV5hBqOxQsGgLEKpX1cNAz6Rr0C5Hg1xfGcsJtep88vbJFmMyV1jNowDtJCYpprqa16Nj35HBuuz7zbzVlIdeQhEJ9I4I7eNhUxlb2/XYRXy2hfsrM9D2TP7B+bVPLjlqgqy8stBhGBCtH32ppNsXHE6uGPHMovcz2VhbP/P3sp9NQV+hF2Q0RbBXrQZkEI9YJdhxQw5hyOqwfPrEEBFy8FpzSKfBAW0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1nQuW/lbxVJxo9H20J7i0+Z6cHtufrF4VbA6zs724f
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0oTxSrAqx34tAubl7rouYPI7qhs6NhoDmGr3PTW1+mypEQw0EO+pZ99zSRnweC5RBoL080AgUKo7KN+v3LDHw=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUnwO+j5aInA4FKMx5pWF8B0Zp6L17GsYV5RBbu6iT67LtXjwbz5nP4EC7t80boMHnS7DRNCAxF0FNMVhQ9o4+1E1n2mrUxxAw8YxcZTabu/lAqRb4I6RzmXdXSA9mF8O3onswi/KhJg6YUTFEWCuxWrMLco15IatKi+hNqcRUk1DreR2L/YN0W5qXkvj1z3aoph1h3Yn1lRjuQDrVHp6lCywixC2pHwYG+CrPyX+0PkXJg+JRvRdxNCIw0D0zOkJrnppmT8XpIj42JLRUGGV592XFVXHiEhZdOI2bdzPy490EfIbWF9Symqi/V5vf8SK9LMOscHXkD7jsT6VKzsUXyk6/IzzZ2TzhD173lt8HpRJyaZq4ME0ZSVYNyD58DN/CQ3xpO1c1E8Wp4fUswc4WHmb/eILnY0lDXOZt6Hb/e+K6RHu5e5GOo0KSfei/LyrqJkBQn2P8UkbJvrUh2bNw+whjvT5CmXd3rPCw+Xq3/K3Gpit1K/4pC0zGC+CQr7E=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILklS4uW4IrGY5dWZTg4VeKVeFB3jPeUpu/8f4D1+rd5
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCelD2lLiMWT09YjxTI9IfdSnHfdMuHKAAEYFKZmJg34mgwUIDqUQqoc9I6a7Ps9pRizY+UpHWL//lD7hvvhD5k=
                                             create=True mode=0644 path=/tmp/ansible.8j84gmzr state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:45 compute-0 sudo[68002]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:46 compute-0 sudo[68154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glchewgenzbxflfxltknhqsnppctzrrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089305.562294-133-24141986037103/AnsiballZ_command.py'
Oct 10 09:41:46 compute-0 sudo[68154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:46 compute-0 python3.9[68156]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8j84gmzr' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:46 compute-0 sudo[68154]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:46 compute-0 sudo[68308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geaidjdcpilztvgdpiorfxfkkoudxuvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089306.5164156-157-90847624663553/AnsiballZ_file.py'
Oct 10 09:41:46 compute-0 sudo[68308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:47 compute-0 python3.9[68310]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8j84gmzr state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:47 compute-0 sudo[68308]: pam_unix(sudo:session): session closed for user root
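[note] Net effect of the tasks above, in plain shell: the gathered host keys are staged in a root-owned temp file, promoted to the system-wide known-hosts file, and the staging file is removed. The redirection replaces the contents of /etc/ssh/ssh_known_hosts in place:

    cat /tmp/ansible.8j84gmzr > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.8j84gmzr
    # sanity check: look up one of the staged hosts
    ssh-keygen -F compute-1.ctlplane.example.com -f /etc/ssh/ssh_known_hosts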
Oct 10 09:41:47 compute-0 sshd-session[67393]: Connection closed by 192.168.122.30 port 37356
Oct 10 09:41:47 compute-0 sshd-session[67390]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:41:47 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 10 09:41:47 compute-0 systemd[1]: session-16.scope: Consumed 3.806s CPU time.
Oct 10 09:41:47 compute-0 systemd-logind[806]: Session 16 logged out. Waiting for processes to exit.
Oct 10 09:41:47 compute-0 systemd-logind[806]: Removed session 16.
Oct 10 09:41:53 compute-0 sshd-session[68335]: Accepted publickey for zuul from 192.168.122.30 port 59286 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:41:53 compute-0 systemd-logind[806]: New session 17 of user zuul.
Oct 10 09:41:53 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 10 09:41:53 compute-0 sshd-session[68335]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:41:54 compute-0 python3.9[68488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:41:55 compute-0 sudo[68642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anaogkcwlmptyypmtocrowveolqmfmnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089314.647696-56-189856223863519/AnsiballZ_systemd.py'
Oct 10 09:41:55 compute-0 sudo[68642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:55 compute-0 python3.9[68644]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 09:41:55 compute-0 sudo[68642]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:56 compute-0 sudo[68796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfuihnuixppohjawqvfeiiobufivvjct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089315.952169-80-13512078970044/AnsiballZ_systemd.py'
Oct 10 09:41:56 compute-0 sudo[68796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:56 compute-0 python3.9[68798]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:41:56 compute-0 sudo[68796]: pam_unix(sudo:session): session closed for user root
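[note] The two systemd tasks reduce to:

    systemctl enable sshd    # enabled=True
    systemctl start sshd     # state=started (a no-op if sshd is already running)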
Oct 10 09:41:57 compute-0 sudo[68949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oieabrchbjqnvdhqlzkpplixbfengjwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089316.8904328-107-247959568767583/AnsiballZ_command.py'
Oct 10 09:41:57 compute-0 sudo[68949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:57 compute-0 python3.9[68951]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:57 compute-0 sudo[68949]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:58 compute-0 sudo[69102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nibnjrwdugfbwirepvqmpwwraacpfjpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089317.882311-131-104905552197142/AnsiballZ_stat.py'
Oct 10 09:41:58 compute-0 sudo[69102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:58 compute-0 python3.9[69104]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:41:58 compute-0 sudo[69102]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:59 compute-0 sudo[69256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktmwnmlboupfecdfddnvbrdnwqidgyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089319.001288-155-164039753869736/AnsiballZ_command.py'
Oct 10 09:41:59 compute-0 sudo[69256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:59 compute-0 python3.9[69258]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:59 compute-0 sudo[69256]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:00 compute-0 sudo[69411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaineunqzjaekclykomrwluogptrklle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089319.8322835-179-48856492440439/AnsiballZ_file.py'
Oct 10 09:42:00 compute-0 sudo[69411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:00 compute-0 python3.9[69413]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:00 compute-0 sudo[69411]: pam_unix(sudo:session): session closed for user root
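[note] The apply sequence, reconstructed from the three logged commands: the chains file is loaded first, and because the edpm-rules.nft.changed sentinel touched at 09:41:29 still existed (the stat at 09:41:58), the flush/rules/update-jumps bundle is replayed and the sentinel is then removed:

    nft -f /etc/nftables/edpm-chains.nft
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    rm -f /etc/nftables/edpm-rules.nft.changed    # sentinel consumed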
Oct 10 09:42:01 compute-0 sshd-session[68338]: Connection closed by 192.168.122.30 port 59286
Oct 10 09:42:01 compute-0 sshd-session[68335]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:42:01 compute-0 systemd-logind[806]: Session 17 logged out. Waiting for processes to exit.
Oct 10 09:42:01 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 10 09:42:01 compute-0 systemd[1]: session-17.scope: Consumed 4.931s CPU time.
Oct 10 09:42:01 compute-0 systemd-logind[806]: Removed session 17.
Oct 10 09:42:06 compute-0 sshd-session[69438]: Accepted publickey for zuul from 192.168.122.30 port 55110 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:42:06 compute-0 systemd-logind[806]: New session 18 of user zuul.
Oct 10 09:42:06 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 10 09:42:06 compute-0 sshd-session[69438]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:42:07 compute-0 python3.9[69591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:42:08 compute-0 sudo[69745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckfbkkoqpurkvboidmlkgpzeylswegt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089328.2970734-62-92897336633509/AnsiballZ_setup.py'
Oct 10 09:42:08 compute-0 sudo[69745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:08 compute-0 python3.9[69747]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:42:09 compute-0 sudo[69745]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:09 compute-0 sudo[69829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzpwtdfemufgaahpkoxhbeuyivgoviwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089328.2970734-62-92897336633509/AnsiballZ_dnf.py'
Oct 10 09:42:09 compute-0 sudo[69829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:09 compute-0 python3.9[69831]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:42:11 compute-0 sudo[69829]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:11 compute-0 python3.9[69982]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
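[note] needs-restarting -r (from yum-utils, installed just above) reports only whether a full reboot is advisable: it exits 0 when no core package was updated and 1 when a reboot is recommended, so it slots straight into a conditional:

    if needs-restarting -r; then
        echo "no reboot required"
    else
        echo "reboot required"    # e.g. kernel, glibc, or systemd was updated
    fi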
Oct 10 09:42:13 compute-0 python3.9[70133]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 09:42:14 compute-0 python3.9[70283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:14 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:42:14 compute-0 python3.9[70434]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:15 compute-0 sshd-session[69441]: Connection closed by 192.168.122.30 port 55110
Oct 10 09:42:15 compute-0 sshd-session[69438]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:42:15 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 10 09:42:15 compute-0 systemd[1]: session-18.scope: Consumed 6.125s CPU time.
Oct 10 09:42:15 compute-0 systemd-logind[806]: Session 18 logged out. Waiting for processes to exit.
Oct 10 09:42:15 compute-0 systemd-logind[806]: Removed session 18.
Oct 10 09:42:23 compute-0 sshd-session[70459]: Accepted publickey for zuul from 38.102.83.82 port 38382 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:42:23 compute-0 systemd-logind[806]: New session 19 of user zuul.
Oct 10 09:42:23 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 10 09:42:23 compute-0 sshd-session[70459]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:42:23 compute-0 sudo[70535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjpepwhfmvstwfbrosagjqzjhevfazo ; /usr/bin/python3'
Oct 10 09:42:23 compute-0 sudo[70535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:23 compute-0 useradd[70539]: new group: name=ceph-admin, GID=42478
Oct 10 09:42:23 compute-0 useradd[70539]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 10 09:42:23 compute-0 sudo[70535]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:24 compute-0 sudo[70621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtkwyfzraigaruunpbfjlpkjdcwacrzg ; /usr/bin/python3'
Oct 10 09:42:24 compute-0 sudo[70621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:24 compute-0 sudo[70621]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:24 compute-0 sudo[70694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjfcfbuefujndzonvvcilgqseniebagc ; /usr/bin/python3'
Oct 10 09:42:24 compute-0 sudo[70694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:24 compute-0 sudo[70694]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:25 compute-0 sudo[70744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fociekuvqzibgfspygfvvzuittjgxibc ; /usr/bin/python3'
Oct 10 09:42:25 compute-0 sudo[70744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:25 compute-0 sudo[70744]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:25 compute-0 sudo[70770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgpankhpudrwqquepmznmvnoigogbhk ; /usr/bin/python3'
Oct 10 09:42:25 compute-0 sudo[70770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:25 compute-0 sudo[70770]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:25 compute-0 sudo[70796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vssppzcogjrkkacfrouzqprqnhbdpxyx ; /usr/bin/python3'
Oct 10 09:42:25 compute-0 sudo[70796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:26 compute-0 sudo[70796]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:26 compute-0 sudo[70822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiztdtbxaeqcridfjleazqbhlcsgftvh ; /usr/bin/python3'
Oct 10 09:42:26 compute-0 sudo[70822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:26 compute-0 sudo[70822]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:27 compute-0 sudo[70900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elglvcwmgynpmcdcbqxdxpdpvoftfsml ; /usr/bin/python3'
Oct 10 09:42:27 compute-0 sudo[70900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:27 compute-0 sudo[70900]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:27 compute-0 sudo[70973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpomznltagzytmzlnnkkznjlajizncmd ; /usr/bin/python3'
Oct 10 09:42:27 compute-0 sudo[70973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:27 compute-0 sudo[70973]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:28 compute-0 sudo[71075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urtemahgoowrakivbqubivxbawdrwfiv ; /usr/bin/python3'
Oct 10 09:42:28 compute-0 sudo[71075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:28 compute-0 sudo[71075]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:28 compute-0 sudo[71148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywzkxmwmuqmqsssceizwoumndftttmcz ; /usr/bin/python3'
Oct 10 09:42:28 compute-0 sudo[71148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:28 compute-0 sudo[71148]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:29 compute-0 sudo[71198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehbmcyzpnrptdiljkdtpweckcgyzmgrf ; /usr/bin/python3'
Oct 10 09:42:29 compute-0 sudo[71198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:29 compute-0 python3[71200]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:42:30 compute-0 sudo[71198]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:31 compute-0 sudo[71293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrfdmknpaztyayjumilglixwrotclbi ; /usr/bin/python3'
Oct 10 09:42:31 compute-0 sudo[71293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:31 compute-0 python3[71295]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 09:42:32 compute-0 sudo[71293]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:33 compute-0 sudo[71320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqdpntnjmrsoqcyyuiomwkjgyolwfkew ; /usr/bin/python3'
Oct 10 09:42:33 compute-0 sudo[71320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:33 compute-0 python3[71322]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:33 compute-0 sudo[71320]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:33 compute-0 sudo[71346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsqampqclqvrodhcdzbfpxiyfvomgcbq ; /usr/bin/python3'
Oct 10 09:42:33 compute-0 sudo[71346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:33 compute-0 python3[71348]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:42:33 compute-0 kernel: loop: module loaded
Oct 10 09:42:33 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 10 09:42:33 compute-0 sudo[71346]: pam_unix(sudo:session): session closed for user root
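[note] The dd form above is the standard sparse-file trick: count=0 writes no data and seek=20G merely sets the file length, so /var/lib/ceph-osd-0.img consumes almost no disk space until the OSD writes to it. Annotated:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # 20 GiB sparse file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img   # expose it as a block device
    lsblk                                        # loop3 now reports a 20G disk

The kernel line that follows confirms the size: 41943040 sectors x 512 bytes = 20 GiB.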
Oct 10 09:42:33 compute-0 sudo[71380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsirzespwbkqcwuclrjwefdbelgdcvuo ; /usr/bin/python3'
Oct 10 09:42:33 compute-0 sudo[71380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:34 compute-0 python3[71382]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:42:34 compute-0 lvm[71385]: PV /dev/loop3 not used.
Oct 10 09:42:34 compute-0 lvm[71394]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:42:34 compute-0 sudo[71380]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:34 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 10 09:42:34 compute-0 lvm[71396]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 10 09:42:34 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
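[note] Annotated, the logged LVM step builds a single OSD-sized logical volume on the loop device:

    pvcreate /dev/loop3                          # label the loop device as a PV
    vgcreate ceph_vg0 /dev/loop3                 # one-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # one LV spanning the whole VG
    lvs                                          # verify

The surrounding systemd/lvm lines are event-driven autoactivation: udev reports the PV online, the generated lvm-activate-ceph_vg0 unit runs vgchange -aay, and ceph_lv0 comes up active.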
Oct 10 09:42:34 compute-0 sudo[71472]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqcdjbuwogvqbgjvtxaokezpesyargl ; /usr/bin/python3'
Oct 10 09:42:34 compute-0 sudo[71472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:34 compute-0 python3[71474]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:42:34 compute-0 sudo[71472]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:35 compute-0 sudo[71545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpfwtklwcksugohuxkesjkatlxqreym ; /usr/bin/python3'
Oct 10 09:42:35 compute-0 sudo[71545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:35 compute-0 python3[71547]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089354.5849364-33483-94804192810284/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:35 compute-0 sudo[71545]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:35 compute-0 sudo[71595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxlbleagfqoccfczkbxeameyykxqwoez ; /usr/bin/python3'
Oct 10 09:42:35 compute-0 sudo[71595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:36 compute-0 python3[71597]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:42:36 compute-0 systemd[1]: Reloading.
Oct 10 09:42:36 compute-0 systemd-sysv-generator[71630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:42:36 compute-0 systemd-rc-local-generator[71627]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:42:36 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 10 09:42:36 compute-0 bash[71637]: /dev/loop3: [64513]:4555204 (/var/lib/ceph-osd-0.img)
Oct 10 09:42:36 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 10 09:42:36 compute-0 sudo[71595]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:36 compute-0 lvm[71638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:42:36 compute-0 lvm[71638]: VG ceph_vg0 finished
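[note] The unit body itself is not logged; what follows is a hypothetical sketch of the rendered ceph-osd-losetup-0.service, consistent with the "Starting Ceph OSD losetup..." line and the losetup-style status output above (the ExecStart and ordering are assumptions, not the actual ceph-osd-losetup.service.j2 template):

    # hypothetical reconstruction of ceph-osd-losetup-0.service
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # attach the backing file, or print the existing mapping if already attached
    ExecStart=/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || losetup /dev/loop3'

    [Install]
    WantedBy=multi-user.target

Whatever its exact contents, its purpose is to re-attach the loop device at boot, since losetup mappings do not survive a reboot.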
Oct 10 09:42:38 compute-0 python3[71662]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:42:41 compute-0 sudo[71753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sclsallikevoaqvlzhdosmdkukwavjza ; /usr/bin/python3'
Oct 10 09:42:41 compute-0 sudo[71753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:41 compute-0 python3[71755]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 09:42:44 compute-0 sudo[71753]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:44 compute-0 sudo[71810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-falrdsjkakmvolooeoizvufilwzziyvr ; /usr/bin/python3'
Oct 10 09:42:44 compute-0 sudo[71810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:44 compute-0 python3[71812]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 09:42:47 compute-0 groupadd[71823]: group added to /etc/group: name=cephadm, GID=992
Oct 10 09:42:47 compute-0 groupadd[71823]: group added to /etc/gshadow: name=cephadm
Oct 10 09:42:47 compute-0 groupadd[71823]: new group: name=cephadm, GID=992
Oct 10 09:42:47 compute-0 useradd[71830]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 10 09:42:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:42:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:42:48 compute-0 sudo[71810]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:48 compute-0 sudo[71929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuhsoteqcqksledreqggdositwqplfwf ; /usr/bin/python3'
Oct 10 09:42:48 compute-0 sudo[71929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:42:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:42:48 compute-0 systemd[1]: run-r9a1b0430caa848019ef16f9c0fda4e3c.service: Deactivated successfully.
Oct 10 09:42:48 compute-0 python3[71932]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:48 compute-0 sudo[71929]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:48 compute-0 sudo[71958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iguwqoaozumhmzqkdmpafwmpmjggfxhy ; /usr/bin/python3'
Oct 10 09:42:48 compute-0 sudo[71958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:48 compute-0 python3[71960]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:42:49 compute-0 sudo[71958]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:49 compute-0 sudo[72022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdaiztqtgibzrqwfvfethxiesugumbq ; /usr/bin/python3'
Oct 10 09:42:49 compute-0 sudo[72022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:49 compute-0 python3[72024]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:49 compute-0 sudo[72022]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:50 compute-0 sudo[72048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmnqmdhnluptggysqpdsfnsehydeczeh ; /usr/bin/python3'
Oct 10 09:42:50 compute-0 sudo[72048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:42:50 compute-0 python3[72050]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:50 compute-0 sudo[72048]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:50 compute-0 sudo[72126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzlsdpxpcryjqddnuxpnjukuecefwxg ; /usr/bin/python3'
Oct 10 09:42:50 compute-0 sudo[72126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:50 compute-0 python3[72128]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:42:50 compute-0 sudo[72126]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:51 compute-0 sudo[72199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxyoncwlbgcmehzwueksfzgcnvuekbrc ; /usr/bin/python3'
Oct 10 09:42:51 compute-0 sudo[72199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:51 compute-0 python3[72201]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089370.5770748-33675-214147731173333/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:51 compute-0 sudo[72199]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:52 compute-0 sudo[72301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiikdkfwpijcgdxualzmnzexnynpehdx ; /usr/bin/python3'
Oct 10 09:42:52 compute-0 sudo[72301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:52 compute-0 python3[72303]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:42:52 compute-0 sudo[72301]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:52 compute-0 sudo[72374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwdfwdjuhkygqvmviyrzlsohcvnanua ; /usr/bin/python3'
Oct 10 09:42:52 compute-0 sudo[72374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:52 compute-0 python3[72376]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089371.870363-33693-218466848877947/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:52 compute-0 sudo[72374]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:52 compute-0 sudo[72424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daepdbgdcivttstlpluzybaurlgawwwi ; /usr/bin/python3'
Oct 10 09:42:52 compute-0 sudo[72424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:52 compute-0 python3[72426]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:53 compute-0 sudo[72424]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:53 compute-0 sudo[72452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmsllkwskhkpuxuaifucfmpazxrcqye ; /usr/bin/python3'
Oct 10 09:42:53 compute-0 sudo[72452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:53 compute-0 python3[72454]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:53 compute-0 sudo[72452]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:53 compute-0 sudo[72480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvcxyykhtxaskqgutwjtzycjwsvlmsft ; /usr/bin/python3'
Oct 10 09:42:53 compute-0 sudo[72480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:53 compute-0 python3[72482]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:53 compute-0 sudo[72480]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:53 compute-0 sudo[72508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mukialxlmwbdodhdbbcxhotgflyriqge ; /usr/bin/python3'
Oct 10 09:42:53 compute-0 sudo[72508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:54 compute-0 python3[72510]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
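[note] For readability, the same bootstrap command with its flags grouped (the stray backslash before --skip-monitoring-stack in the raw params looks like a template artifact; all flags below are taken verbatim from the logged command):

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-user ceph-admin \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --allow-fqdn-hostname \
        --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --output-config /etc/ceph/ceph.conf \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100

--skip-firewalld is presumably set because the host firewall is already managed through the EDPM nftables files configured earlier; --fsid pins a pre-generated cluster id; --mon-ip places the first monitor on this node's ctlplane address (192.168.122.100, i.e. compute-0).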
Oct 10 09:42:54 compute-0 sshd-session[72514]: Accepted publickey for ceph-admin from 192.168.122.100 port 59358 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:42:54 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 09:42:54 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 09:42:54 compute-0 systemd-logind[806]: New session 20 of user ceph-admin.
Oct 10 09:42:54 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 09:42:54 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 10 09:42:54 compute-0 systemd[72518]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:42:54 compute-0 systemd[72518]: Queued start job for default target Main User Target.
Oct 10 09:42:54 compute-0 systemd[72518]: Created slice User Application Slice.
Oct 10 09:42:54 compute-0 systemd[72518]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:42:54 compute-0 systemd[72518]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:42:54 compute-0 systemd[72518]: Reached target Paths.
Oct 10 09:42:54 compute-0 systemd[72518]: Reached target Timers.
Oct 10 09:42:54 compute-0 systemd[72518]: Starting D-Bus User Message Bus Socket...
Oct 10 09:42:54 compute-0 systemd[72518]: Starting Create User's Volatile Files and Directories...
Oct 10 09:42:54 compute-0 systemd[72518]: Finished Create User's Volatile Files and Directories.
Oct 10 09:42:54 compute-0 systemd[72518]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:42:54 compute-0 systemd[72518]: Reached target Sockets.
Oct 10 09:42:54 compute-0 systemd[72518]: Reached target Basic System.
Oct 10 09:42:54 compute-0 systemd[72518]: Reached target Main User Target.
Oct 10 09:42:54 compute-0 systemd[72518]: Startup finished in 123ms.
Oct 10 09:42:54 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 10 09:42:54 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct 10 09:42:54 compute-0 sshd-session[72514]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:42:54 compute-0 sudo[72535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 10 09:42:54 compute-0 sudo[72535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:42:54 compute-0 sudo[72535]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:54 compute-0 sshd-session[72534]: Received disconnect from 192.168.122.100 port 59358:11: disconnected by user
Oct 10 09:42:54 compute-0 sshd-session[72534]: Disconnected from user ceph-admin 192.168.122.100 port 59358
Oct 10 09:42:54 compute-0 sshd-session[72514]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:42:54 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct 10 09:42:54 compute-0 systemd-logind[806]: Session 20 logged out. Waiting for processes to exit.
Oct 10 09:42:54 compute-0 systemd-logind[806]: Removed session 20.
Oct 10 09:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2636107177-lower\x2dmapped.mount: Deactivated successfully.
Oct 10 09:43:04 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 10 09:43:04 compute-0 systemd[72518]: Activating special unit Exit the Session...
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped target Main User Target.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped target Basic System.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped target Paths.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped target Sockets.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped target Timers.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 09:43:04 compute-0 systemd[72518]: Closed D-Bus User Message Bus Socket.
Oct 10 09:43:04 compute-0 systemd[72518]: Stopped Create User's Volatile Files and Directories.
Oct 10 09:43:04 compute-0 systemd[72518]: Removed slice User Application Slice.
Oct 10 09:43:04 compute-0 systemd[72518]: Reached target Shutdown.
Oct 10 09:43:04 compute-0 systemd[72518]: Finished Exit the Session.
Oct 10 09:43:04 compute-0 systemd[72518]: Reached target Exit the Session.
Oct 10 09:43:04 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 10 09:43:04 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 10 09:43:04 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 10 09:43:04 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 10 09:43:04 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 10 09:43:04 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 10 09:43:04 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 10 09:43:11 compute-0 podman[72613]: 2025-10-10 09:43:11.745863155 +0000 UTC m=+16.703470366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:43:11 compute-0 podman[72674]: 2025-10-10 09:43:11.84549504 +0000 UTC m=+0.068852460 container create efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd (image=quay.io/ceph/ceph:v19, name=relaxed_chaplygin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:43:11 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 10 09:43:11 compute-0 systemd[1]: Started libpod-conmon-efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd.scope.
Oct 10 09:43:11 compute-0 podman[72674]: 2025-10-10 09:43:11.818485723 +0000 UTC m=+0.041843243 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:11 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:11 compute-0 podman[72674]: 2025-10-10 09:43:11.9449754 +0000 UTC m=+0.168332870 container init efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd (image=quay.io/ceph/ceph:v19, name=relaxed_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:11 compute-0 podman[72674]: 2025-10-10 09:43:11.955170917 +0000 UTC m=+0.178528387 container start efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd (image=quay.io/ceph/ceph:v19, name=relaxed_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:43:11 compute-0 podman[72674]: 2025-10-10 09:43:11.959352468 +0000 UTC m=+0.182710008 container attach efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd (image=quay.io/ceph/ceph:v19, name=relaxed_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 09:43:12 compute-0 relaxed_chaplygin[72690]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct 10 09:43:12 compute-0 systemd[1]: libpod-efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd.scope: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72674]: 2025-10-10 09:43:12.080127762 +0000 UTC m=+0.303485212 container died efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd (image=quay.io/ceph/ceph:v19, name=relaxed_chaplygin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d9a8856b920b5fae6dcb09b32f653289e3a43162e83f9f2fa2da34998ce31cb-merged.mount: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72674]: 2025-10-10 09:43:12.127410958 +0000 UTC m=+0.350768428 container remove efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd (image=quay.io/ceph/ceph:v19, name=relaxed_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:43:12 compute-0 systemd[1]: libpod-conmon-efb4e3e2371eb7f8e4937624140c832bb344d6888fceaecefb582c00f8ba69fd.scope: Deactivated successfully.
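[Annotation] The create -> init -> start -> attach -> died -> remove sequence above is podman's one-shot container lifecycle: the orchestrator runs a throwaway container per probe, and relaxed_chaplygin's only output was the image's Ceph version string. A minimal Python sketch of that probe, assuming the wrapped command is simply `ceph --version` (the exact entrypoint is not recorded in this log):

import subprocess

IMAGE = "quay.io/ceph/ceph:v19"  # tag taken from the image pull events above

def probe_ceph_version(image: str = IMAGE) -> str:
    # One-shot container: --rm deletes it on exit, which is why each
    # lifecycle above (create -> died -> remove) spans well under a second.
    result = subprocess.run(
        ["podman", "run", "--rm", image, "ceph", "--version"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Expected output, per the log: "ceph version 19.2.3 (c92aebb...) squid (stable)"
    print(probe_ceph_version())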
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.219277971 +0000 UTC m=+0.060359383 container create 649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d (image=quay.io/ceph/ceph:v19, name=ecstatic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:12 compute-0 systemd[1]: Started libpod-conmon-649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d.scope.
Oct 10 09:43:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.198902488 +0000 UTC m=+0.039983920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.291723322 +0000 UTC m=+0.132804784 container init 649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d (image=quay.io/ceph/ceph:v19, name=ecstatic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.301227925 +0000 UTC m=+0.142309327 container start 649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d (image=quay.io/ceph/ceph:v19, name=ecstatic_margulis, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.305664336 +0000 UTC m=+0.146745818 container attach 649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d (image=quay.io/ceph/ceph:v19, name=ecstatic_margulis, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:43:12 compute-0 ecstatic_margulis[72724]: 167 167
Oct 10 09:43:12 compute-0 systemd[1]: libpod-649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d.scope: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.309268808 +0000 UTC m=+0.150350230 container died 649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d (image=quay.io/ceph/ceph:v19, name=ecstatic_margulis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:12 compute-0 podman[72706]: 2025-10-10 09:43:12.350771079 +0000 UTC m=+0.191852501 container remove 649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d (image=quay.io/ceph/ceph:v19, name=ecstatic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:12 compute-0 systemd[1]: libpod-conmon-649bd5c44c8ce5d32808c8bb8dabcddcfcd10b2cb130a14a293d7c79e160797d.scope: Deactivated successfully.
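[Annotation] The bare "167 167" from ecstatic_margulis is a uid/gid probe: the ceph user maps to uid/gid 167 in Red Hat-family Ceph images, and the deploy tooling needs that pair before chowning host directories. A hedged reconstruction; the stat target is an assumption, since the actual command is not logged:

import subprocess

def ceph_uid_gid(image: str = "quay.io/ceph/ceph:v19") -> tuple[int, int]:
    # Assumption: the probe stats a ceph-owned path inside the image;
    # /var/lib/ceph is owned by ceph:ceph (167:167) in these images.
    result = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    )
    uid, gid = result.stdout.split()
    return int(uid), int(gid)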
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.444762342 +0000 UTC m=+0.064178812 container create 7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:12 compute-0 systemd[1]: Started libpod-conmon-7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f.scope.
Oct 10 09:43:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.417705473 +0000 UTC m=+0.037121993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.513398303 +0000 UTC m=+0.132814763 container init 7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.521896332 +0000 UTC m=+0.141312802 container start 7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.525192105 +0000 UTC m=+0.144608565 container attach 7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 09:43:12 compute-0 goofy_grothendieck[72756]: AQAw1eho7cY1IRAAvayTKipasMWKQf9/v16ppQ==
Oct 10 09:43:12 compute-0 systemd[1]: libpod-7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f.scope: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.563287639 +0000 UTC m=+0.182704119 container died 7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:12 compute-0 podman[72740]: 2025-10-10 09:43:12.601782107 +0000 UTC m=+0.221198547 container remove 7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:12 compute-0 systemd[1]: libpod-conmon-7744bdeb356ea7058ffc6b7e3d5bd33241edcf9ea124bcdc21fcd1821f51fe8f.scope: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.679755706 +0000 UTC m=+0.054779352 container create ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c (image=quay.io/ceph/ceph:v19, name=brave_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:43:12 compute-0 systemd[1]: Started libpod-conmon-ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c.scope.
Oct 10 09:43:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.741687941 +0000 UTC m=+0.116711607 container init ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c (image=quay.io/ceph/ceph:v19, name=brave_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.65368232 +0000 UTC m=+0.028706006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.749005709 +0000 UTC m=+0.124029335 container start ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c (image=quay.io/ceph/ceph:v19, name=brave_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.752413765 +0000 UTC m=+0.127437431 container attach ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c (image=quay.io/ceph/ceph:v19, name=brave_brown, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:43:12 compute-0 brave_brown[72790]: AQAw1ehopNneLRAADl+dEXdZmBZLo/Xy5g7X5Q==
Oct 10 09:43:12 compute-0 systemd[1]: libpod-ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c.scope: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.773495602 +0000 UTC m=+0.148519308 container died ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c (image=quay.io/ceph/ceph:v19, name=brave_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1faeb53f1c2397e436464696790b03265ec3f7f743cb41d3662531cc44c7c2e-merged.mount: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72773]: 2025-10-10 09:43:12.820556741 +0000 UTC m=+0.195580397 container remove ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c (image=quay.io/ceph/ceph:v19, name=brave_brown, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:43:12 compute-0 systemd[1]: libpod-conmon-ca10366be0f8e2a5acda9a2b0ea52576fa2941652f34102a8f6d3ba77eaaf69c.scope: Deactivated successfully.
Oct 10 09:43:12 compute-0 podman[72809]: 2025-10-10 09:43:12.887898089 +0000 UTC m=+0.040223478 container create 32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba (image=quay.io/ceph/ceph:v19, name=charming_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 09:43:12 compute-0 systemd[1]: Started libpod-conmon-32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba.scope.
Oct 10 09:43:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:12 compute-0 podman[72809]: 2025-10-10 09:43:12.867581948 +0000 UTC m=+0.019907367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:12 compute-0 podman[72809]: 2025-10-10 09:43:12.969223452 +0000 UTC m=+0.121548851 container init 32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba (image=quay.io/ceph/ceph:v19, name=charming_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:43:12 compute-0 podman[72809]: 2025-10-10 09:43:12.973862669 +0000 UTC m=+0.126188058 container start 32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba (image=quay.io/ceph/ceph:v19, name=charming_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:43:12 compute-0 podman[72809]: 2025-10-10 09:43:12.977592286 +0000 UTC m=+0.129917725 container attach 32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba (image=quay.io/ceph/ceph:v19, name=charming_curran, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:43:13 compute-0 charming_curran[72827]: AQAx1ehon57fABAAiMZrUuMauqXU/9EkHIFvfw==
Oct 10 09:43:13 compute-0 systemd[1]: libpod-32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba.scope: Deactivated successfully.
Oct 10 09:43:13 compute-0 podman[72809]: 2025-10-10 09:43:13.01949602 +0000 UTC m=+0.171821399 container died 32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba (image=quay.io/ceph/ceph:v19, name=charming_curran, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:13 compute-0 podman[72809]: 2025-10-10 09:43:13.063127082 +0000 UTC m=+0.215452501 container remove 32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba (image=quay.io/ceph/ceph:v19, name=charming_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:43:13 compute-0 systemd[1]: libpod-conmon-32f1352b3aaade5a450c851efa62e9250b5dd1042622b14088af4b0b8956a2ba.scope: Deactivated successfully.
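[Annotation] The three base64 strings printed by goofy_grothendieck, brave_brown and charming_curran are freshly minted cephx secrets (the "AQ" prefix is the serialized key header); a bootstrap generates one each for the mon, the admin client, and so on. Presumably each container just wraps ceph-authtool's key generator:

import subprocess

def gen_cephx_key(image: str = "quay.io/ceph/ceph:v19") -> str:
    # ceph-authtool --gen-print-key writes a new base64 secret to stdout,
    # matching the AQ... lines above. Which entity each key was minted for
    # (mon., client.admin, ...) is not visible in this log.
    result = subprocess.run(
        ["podman", "run", "--rm", image, "ceph-authtool", "--gen-print-key"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()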
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.122413767 +0000 UTC m=+0.036524662 container create 2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:13 compute-0 systemd[1]: Started libpod-conmon-2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676.scope.
Oct 10 09:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de23cb8cab403b8f65f788f413d4443323cb43b451829e6b5c20132126a1c08b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.180038284 +0000 UTC m=+0.094149189 container init 2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.184935862 +0000 UTC m=+0.099046737 container start 2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.187711476 +0000 UTC m=+0.101822351 container attach 2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.106258308 +0000 UTC m=+0.020369183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:13 compute-0 jovial_volhard[72863]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 10 09:43:13 compute-0 jovial_volhard[72863]: setting min_mon_release = quincy
Oct 10 09:43:13 compute-0 jovial_volhard[72863]: /usr/bin/monmaptool: set fsid to 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:13 compute-0 jovial_volhard[72863]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 10 09:43:13 compute-0 systemd[1]: libpod-2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676.scope: Deactivated successfully.
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.238938546 +0000 UTC m=+0.153049421 container died 2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:13 compute-0 podman[72846]: 2025-10-10 09:43:13.308501479 +0000 UTC m=+0.222612384 container remove 2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 09:43:13 compute-0 systemd[1]: libpod-conmon-2ff62db26ea5d1a5f20a3e98fd543cb6447a87e1734ec6dea6ac6e6883381676.scope: Deactivated successfully.
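[Annotation] jovial_volhard wraps /usr/bin/monmaptool: per its own output it wrote an epoch-0 monmap to /tmp/monmap with cluster fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 and a single monitor, and set min_mon_release = quincy itself. Roughly the following invocation; the monitor's address vector is not shown in the log, so the value below is a placeholder:

import subprocess

FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"  # from the monmaptool output above
MON_ADDR = "[v2:192.0.2.10:3300,v1:192.0.2.10:6789]"  # hypothetical address

# --create/--clobber write a fresh epoch-0 map; --addv registers the one
# monitor the log reports ("writing epoch 0 to /tmp/monmap (1 monitors)").
subprocess.run(
    ["monmaptool", "--create", "--clobber",
     "--fsid", FSID,
     "--addv", "compute-0", MON_ADDR,
     "/tmp/monmap"],
    check=True,
)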
Oct 10 09:43:13 compute-0 podman[72882]: 2025-10-10 09:43:13.389068477 +0000 UTC m=+0.052979751 container create 05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531 (image=quay.io/ceph/ceph:v19, name=ecstatic_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:13 compute-0 systemd[1]: Started libpod-conmon-05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531.scope.
Oct 10 09:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0906d2cd5940e610792cac5a5fe6a09a996daf0d9eadfb82bd3e3467fbeee21/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0906d2cd5940e610792cac5a5fe6a09a996daf0d9eadfb82bd3e3467fbeee21/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0906d2cd5940e610792cac5a5fe6a09a996daf0d9eadfb82bd3e3467fbeee21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:13 compute-0 podman[72882]: 2025-10-10 09:43:13.364289916 +0000 UTC m=+0.028201190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0906d2cd5940e610792cac5a5fe6a09a996daf0d9eadfb82bd3e3467fbeee21/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:13 compute-0 podman[72882]: 2025-10-10 09:43:13.47450159 +0000 UTC m=+0.138412874 container init 05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531 (image=quay.io/ceph/ceph:v19, name=ecstatic_joliot, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:13 compute-0 podman[72882]: 2025-10-10 09:43:13.484554362 +0000 UTC m=+0.148465616 container start 05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531 (image=quay.io/ceph/ceph:v19, name=ecstatic_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:13 compute-0 podman[72882]: 2025-10-10 09:43:13.491842229 +0000 UTC m=+0.155753513 container attach 05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531 (image=quay.io/ceph/ceph:v19, name=ecstatic_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 09:43:13 compute-0 systemd[1]: libpod-05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531.scope: Deactivated successfully.
Oct 10 09:43:13 compute-0 podman[72924]: 2025-10-10 09:43:13.622920423 +0000 UTC m=+0.034741401 container died 05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531 (image=quay.io/ceph/ceph:v19, name=ecstatic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:13 compute-0 podman[72924]: 2025-10-10 09:43:13.662227828 +0000 UTC m=+0.074048816 container remove 05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531 (image=quay.io/ceph/ceph:v19, name=ecstatic_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:13 compute-0 systemd[1]: libpod-conmon-05529d798f1a546b1b42edf29f435ac8c463f1f005a26416dbfd313bcb72d531.scope: Deactivated successfully.
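[Annotation] ecstatic_joliot bind-mounts /tmp/keyring, /tmp/monmap and /var/lib/ceph/mon/ceph-compute-0 (see the xfs remount lines above), which matches the classic monitor-mkfs step: seeding the mon data directory from the freshly written monmap and keyring. The standard form of that command, assuming nothing beyond the mounts shown:

import subprocess

# Initialize the mon store under /var/lib/ceph/mon/ceph-compute-0 from
# the monmap and keyring staged in /tmp (paths as mounted above).
subprocess.run(
    ["ceph-mon", "--mkfs",
     "-i", "compute-0",
     "--monmap", "/tmp/monmap",
     "--keyring", "/tmp/keyring"],
    check=True,
)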
Oct 10 09:43:13 compute-0 systemd[1]: Reloading.
Oct 10 09:43:13 compute-0 systemd-rc-local-generator[72972]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:13 compute-0 systemd-sysv-generator[72976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed68bc5a05868f3a1dcf51d877ff139c8d1b6004db47047d9c409b6dc11afdf5-merged.mount: Deactivated successfully.
Oct 10 09:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:43:14 compute-0 systemd[1]: Reloading.
Oct 10 09:43:14 compute-0 systemd-rc-local-generator[73009]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:14 compute-0 systemd-sysv-generator[73012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:14 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 10 09:43:14 compute-0 systemd[1]: Reloading.
Oct 10 09:43:14 compute-0 systemd-sysv-generator[73053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:14 compute-0 systemd-rc-local-generator[73047]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:14 compute-0 systemd[1]: Reached target Ceph cluster 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:43:14 compute-0 systemd[1]: Reloading.
Oct 10 09:43:14 compute-0 systemd-rc-local-generator[73080]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:14 compute-0 systemd-sysv-generator[73084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:14 compute-0 systemd[1]: Reloading.
Oct 10 09:43:14 compute-0 systemd-rc-local-generator[73123]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:15 compute-0 systemd-sysv-generator[73126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:15 compute-0 systemd[1]: Created slice Slice /system/ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:43:15 compute-0 systemd[1]: Reached target System Time Set.
Oct 10 09:43:15 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 10 09:43:15 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:43:15 compute-0 podman[73180]: 2025-10-10 09:43:15.527476395 +0000 UTC m=+0.063188978 container create d2c0e3da2cad0aeb8e75de2e8b099162cbd308d61dd9562c893f02a7ae79c68d (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 09:43:15 compute-0 podman[73180]: 2025-10-10 09:43:15.496756342 +0000 UTC m=+0.032468955 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1094c62279f8fef5d9e6e88c22f2df22720d7d6abf0721fe539c24176e4b18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1094c62279f8fef5d9e6e88c22f2df22720d7d6abf0721fe539c24176e4b18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1094c62279f8fef5d9e6e88c22f2df22720d7d6abf0721fe539c24176e4b18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1094c62279f8fef5d9e6e88c22f2df22720d7d6abf0721fe539c24176e4b18/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 podman[73180]: 2025-10-10 09:43:15.617704291 +0000 UTC m=+0.153416894 container init d2c0e3da2cad0aeb8e75de2e8b099162cbd308d61dd9562c893f02a7ae79c68d (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:15 compute-0 podman[73180]: 2025-10-10 09:43:15.623185258 +0000 UTC m=+0.158897801 container start d2c0e3da2cad0aeb8e75de2e8b099162cbd308d61dd9562c893f02a7ae79c68d (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:15 compute-0 bash[73180]: d2c0e3da2cad0aeb8e75de2e8b099162cbd308d61dd9562c893f02a7ae79c68d
Oct 10 09:43:15 compute-0 systemd[1]: Started Ceph mon.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
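[Annotation] From here the monitor runs as a managed systemd service inside the ceph-21f084a3-... slice created at 09:43:15. cephadm-style deployments name these units ceph-<fsid>@<type>.<id>.service; the name below is inferred from the fsid and unit description above, not printed verbatim in this log:

import subprocess

FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"
UNIT = f"ceph-{FSID}@mon.compute-0.service"  # inferred cephadm unit name

# Pull just this monitor's journal instead of the whole host log.
subprocess.run(["journalctl", "-u", UNIT, "--no-pager"], check=True)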
Oct 10 09:43:15 compute-0 ceph-mon[73199]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: pidfile_write: ignore empty --pid-file
Oct 10 09:43:15 compute-0 ceph-mon[73199]: load: jerasure load: lrc 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Git sha 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: DB SUMMARY
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: DB Session ID:  7USIJ5A3AZ1KZD3572OG
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                                     Options.env: 0x55ad161fcc20
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                                Options.info_log: 0x55ad172b0d60
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                                 Options.wal_dir: 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                    Options.write_buffer_manager: 0x55ad172b5900
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                               Options.row_cache: None
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                              Options.wal_filter: None
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.wal_compression: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.max_background_jobs: 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.max_total_wal_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:       Options.compaction_readahead_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Compression algorithms supported:
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kZSTD supported: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:           Options.merge_operator: 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:        Options.compaction_filter: None
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ad172b0500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ad172d5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:        Options.write_buffer_size: 33554432
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:  Options.max_write_buffer_number: 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.compression: NoCompression
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.num_levels: 7
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b11c2339-35ff-491c-b185-eda5e2ea0ba8
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089395688128, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089395691057, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "7USIJ5A3AZ1KZD3572OG", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089395691222, "job": 1, "event": "recovery_finished"}
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ad172d6e00
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: DB pointer 0x55ad173e0000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:43:15 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ad172d5350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 09:43:15 compute-0 ceph-mon[73199]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@-1(???) e0 preinit fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 10 09:43:15 compute-0 ceph-mon[73199]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 10 09:43:15 compute-0 podman[73200]: 2025-10-10 09:43:15.721086074 +0000 UTC m=+0.055677163 container create 2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765 (image=quay.io/ceph/ceph:v19, name=upbeat_bartik, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 10 09:43:15 compute-0 ceph-mon[73199]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : last_changed 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : created 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864352,os=Linux}
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).mds e1 new map
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-10-10T09:43:15.731413+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : fsmap 
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mkfs 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 10 09:43:15 compute-0 ceph-mon[73199]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 10 09:43:15 compute-0 ceph-mon[73199]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 09:43:15 compute-0 systemd[1]: Started libpod-conmon-2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765.scope.
Oct 10 09:43:15 compute-0 podman[73200]: 2025-10-10 09:43:15.697546434 +0000 UTC m=+0.032137563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e795ce24c0324d7f9c2f63d73d60da623c6cf287957f7ac00d5a77dad2dab49/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e795ce24c0324d7f9c2f63d73d60da623c6cf287957f7ac00d5a77dad2dab49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e795ce24c0324d7f9c2f63d73d60da623c6cf287957f7ac00d5a77dad2dab49/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:15 compute-0 podman[73200]: 2025-10-10 09:43:15.852505509 +0000 UTC m=+0.187096708 container init 2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765 (image=quay.io/ceph/ceph:v19, name=upbeat_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:15 compute-0 podman[73200]: 2025-10-10 09:43:15.863672889 +0000 UTC m=+0.198263988 container start 2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765 (image=quay.io/ceph/ceph:v19, name=upbeat_bartik, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 09:43:15 compute-0 podman[73200]: 2025-10-10 09:43:15.867350613 +0000 UTC m=+0.201941752 container attach 2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765 (image=quay.io/ceph/ceph:v19, name=upbeat_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970556859' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:   cluster:
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     id:     21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     health: HEALTH_OK
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:  
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:   services:
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     mon: 1 daemons, quorum compute-0 (age 0.347283s)
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     mgr: no daemons active
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     osd: 0 osds: 0 up, 0 in
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:  
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:   data:
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     pools:   0 pools, 0 pgs
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     objects: 0 objects, 0 B
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     usage:   0 B used, 0 B / 0 B avail
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:     pgs:     
Oct 10 09:43:16 compute-0 upbeat_bartik[73254]:  
Oct 10 09:43:16 compute-0 systemd[1]: libpod-2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765.scope: Deactivated successfully.
Oct 10 09:43:16 compute-0 conmon[73254]: conmon 2a1a9d2e7cc73ce66d80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765.scope/container/memory.events
Oct 10 09:43:16 compute-0 podman[73200]: 2025-10-10 09:43:16.090429713 +0000 UTC m=+0.425020842 container died 2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765 (image=quay.io/ceph/ceph:v19, name=upbeat_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:43:16 compute-0 podman[73200]: 2025-10-10 09:43:16.129525021 +0000 UTC m=+0.464116110 container remove 2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765 (image=quay.io/ceph/ceph:v19, name=upbeat_bartik, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:16 compute-0 systemd[1]: libpod-conmon-2a1a9d2e7cc73ce66d80af6ebf3dcce6e57f12515f584cd5bac93416b843e765.scope: Deactivated successfully.
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.205390569 +0000 UTC m=+0.046968387 container create 36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce (image=quay.io/ceph/ceph:v19, name=stoic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:16 compute-0 systemd[1]: Started libpod-conmon-36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce.scope.
Oct 10 09:43:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea30701faff2653125938072108f4e262c966ea0a45f9959fc14f5115cfc233/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea30701faff2653125938072108f4e262c966ea0a45f9959fc14f5115cfc233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea30701faff2653125938072108f4e262c966ea0a45f9959fc14f5115cfc233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.188502735 +0000 UTC m=+0.030080573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea30701faff2653125938072108f4e262c966ea0a45f9959fc14f5115cfc233/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.30075707 +0000 UTC m=+0.142334958 container init 36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce (image=quay.io/ceph/ceph:v19, name=stoic_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.312060764 +0000 UTC m=+0.153638582 container start 36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce (image=quay.io/ceph/ceph:v19, name=stoic_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.315693827 +0000 UTC m=+0.157271685 container attach 36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce (image=quay.io/ceph/ceph:v19, name=stoic_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1407388480' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:43:16 compute-0 ceph-mon[73199]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1407388480' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 09:43:16 compute-0 stoic_clarke[73309]: 
Oct 10 09:43:16 compute-0 stoic_clarke[73309]: [global]
Oct 10 09:43:16 compute-0 stoic_clarke[73309]:         fsid = 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:16 compute-0 stoic_clarke[73309]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 10 09:43:16 compute-0 systemd[1]: libpod-36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce.scope: Deactivated successfully.
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.563972293 +0000 UTC m=+0.405550121 container died 36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce (image=quay.io/ceph/ceph:v19, name=stoic_clarke, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fea30701faff2653125938072108f4e262c966ea0a45f9959fc14f5115cfc233-merged.mount: Deactivated successfully.
Oct 10 09:43:16 compute-0 podman[73293]: 2025-10-10 09:43:16.611059873 +0000 UTC m=+0.452637741 container remove 36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce (image=quay.io/ceph/ceph:v19, name=stoic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:43:16 compute-0 systemd[1]: libpod-conmon-36ac6fd7bcb4686e5a459ec16b0629c393f04231ef14943e06efc71bfb0cebce.scope: Deactivated successfully.
Oct 10 09:43:16 compute-0 podman[73346]: 2025-10-10 09:43:16.72137078 +0000 UTC m=+0.078542379 container create a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879 (image=quay.io/ceph/ceph:v19, name=reverent_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: monmap epoch 1
Oct 10 09:43:16 compute-0 ceph-mon[73199]: fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:16 compute-0 ceph-mon[73199]: last_changed 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:16 compute-0 ceph-mon[73199]: created 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:16 compute-0 ceph-mon[73199]: min_mon_release 19 (squid)
Oct 10 09:43:16 compute-0 ceph-mon[73199]: election_strategy: 1
Oct 10 09:43:16 compute-0 ceph-mon[73199]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:43:16 compute-0 ceph-mon[73199]: fsmap 
Oct 10 09:43:16 compute-0 ceph-mon[73199]: osdmap e1: 0 total, 0 up, 0 in
Oct 10 09:43:16 compute-0 ceph-mon[73199]: mgrmap e1: no daemons active
Oct 10 09:43:16 compute-0 ceph-mon[73199]: from='client.? 192.168.122.100:0/1970556859' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 09:43:16 compute-0 ceph-mon[73199]: from='client.? 192.168.122.100:0/1407388480' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:43:16 compute-0 ceph-mon[73199]: from='client.? 192.168.122.100:0/1407388480' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 09:43:16 compute-0 systemd[1]: Started libpod-conmon-a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879.scope.
Oct 10 09:43:16 compute-0 podman[73346]: 2025-10-10 09:43:16.688363319 +0000 UTC m=+0.045534988 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c6db4bf893ccd4da09c2538a337612d8c08f98d9d81fc0190626e755a76af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c6db4bf893ccd4da09c2538a337612d8c08f98d9d81fc0190626e755a76af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c6db4bf893ccd4da09c2538a337612d8c08f98d9d81fc0190626e755a76af9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c6db4bf893ccd4da09c2538a337612d8c08f98d9d81fc0190626e755a76af9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:16 compute-0 podman[73346]: 2025-10-10 09:43:16.839485134 +0000 UTC m=+0.196656743 container init a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879 (image=quay.io/ceph/ceph:v19, name=reverent_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:16 compute-0 podman[73346]: 2025-10-10 09:43:16.855944573 +0000 UTC m=+0.213116172 container start a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879 (image=quay.io/ceph/ceph:v19, name=reverent_clarke, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:16 compute-0 podman[73346]: 2025-10-10 09:43:16.860177177 +0000 UTC m=+0.217348786 container attach a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879 (image=quay.io/ceph/ceph:v19, name=reverent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:17 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:43:17 compute-0 ceph-mon[73199]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901646882' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:43:17 compute-0 systemd[1]: libpod-a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879.scope: Deactivated successfully.
Oct 10 09:43:17 compute-0 podman[73346]: 2025-10-10 09:43:17.113601318 +0000 UTC m=+0.470772887 container died a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879 (image=quay.io/ceph/ceph:v19, name=reverent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7c6db4bf893ccd4da09c2538a337612d8c08f98d9d81fc0190626e755a76af9-merged.mount: Deactivated successfully.
Oct 10 09:43:17 compute-0 podman[73346]: 2025-10-10 09:43:17.166886888 +0000 UTC m=+0.524058497 container remove a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879 (image=quay.io/ceph/ceph:v19, name=reverent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:43:17 compute-0 systemd[1]: libpod-conmon-a0b19125766bd2d1ea901281c193aa2c5e329ca73addca2094423561a94fb879.scope: Deactivated successfully.
Oct 10 09:43:17 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:43:17 compute-0 ceph-mon[73199]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 10 09:43:17 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 10 09:43:17 compute-0 ceph-mon[73199]: mon.compute-0@0(leader) e1 shutdown
Oct 10 09:43:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0[73195]: 2025-10-10T09:43:17.417+0000 7fcc1e2f2640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 10 09:43:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0[73195]: 2025-10-10T09:43:17.417+0000 7fcc1e2f2640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 10 09:43:17 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 09:43:17 compute-0 ceph-mon[73199]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 09:43:17 compute-0 podman[73429]: 2025-10-10 09:43:17.593652509 +0000 UTC m=+0.211153566 container died d2c0e3da2cad0aeb8e75de2e8b099162cbd308d61dd9562c893f02a7ae79c68d (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1094c62279f8fef5d9e6e88c22f2df22720d7d6abf0721fe539c24176e4b18-merged.mount: Deactivated successfully.
Oct 10 09:43:17 compute-0 podman[73429]: 2025-10-10 09:43:17.63928283 +0000 UTC m=+0.256783927 container remove d2c0e3da2cad0aeb8e75de2e8b099162cbd308d61dd9562c893f02a7ae79c68d (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 09:43:17 compute-0 bash[73429]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0
Oct 10 09:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:43:17 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mon.compute-0.service: Deactivated successfully.
Oct 10 09:43:17 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:43:17 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mon.compute-0.service: Consumed 1.153s CPU time.
Oct 10 09:43:17 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:43:18 compute-0 podman[73532]: 2025-10-10 09:43:18.142471837 +0000 UTC m=+0.067599438 container create 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 09:43:18 compute-0 podman[73532]: 2025-10-10 09:43:18.111230925 +0000 UTC m=+0.036358556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f54e4e53d76ff6bc2c002eeaa5f2cfdc0602e03a79916718c6a3cba9c61e97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f54e4e53d76ff6bc2c002eeaa5f2cfdc0602e03a79916718c6a3cba9c61e97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f54e4e53d76ff6bc2c002eeaa5f2cfdc0602e03a79916718c6a3cba9c61e97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f54e4e53d76ff6bc2c002eeaa5f2cfdc0602e03a79916718c6a3cba9c61e97/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 podman[73532]: 2025-10-10 09:43:18.231441129 +0000 UTC m=+0.156568740 container init 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:18 compute-0 podman[73532]: 2025-10-10 09:43:18.246701118 +0000 UTC m=+0.171828709 container start 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:43:18 compute-0 bash[73532]: 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e
Oct 10 09:43:18 compute-0 systemd[1]: Started Ceph mon.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:43:18 compute-0 ceph-mon[73551]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: pidfile_write: ignore empty --pid-file
Oct 10 09:43:18 compute-0 ceph-mon[73551]: load: jerasure load: lrc 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Git sha 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: DB SUMMARY
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: DB Session ID:  X51S9MA51CSPL9DJ2ZU1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                                     Options.env: 0x558b2c1e1c20
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                                Options.info_log: 0x558b2d7b5ac0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                                 Options.wal_dir: 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                    Options.write_buffer_manager: 0x558b2d7b9900
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                               Options.row_cache: None
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                              Options.wal_filter: None
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.wal_compression: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.max_background_jobs: 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.max_total_wal_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:       Options.compaction_readahead_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Compression algorithms supported:
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kZSTD supported: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:           Options.merge_operator: 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:        Options.compaction_filter: None
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b2d7b4aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558b2d7d9350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:        Options.write_buffer_size: 33554432
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:  Options.max_write_buffer_number: 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.compression: NoCompression
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.num_levels: 7
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b11c2339-35ff-491c-b185-eda5e2ea0ba8
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089398304597, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089398308565, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089398, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089398308694, "job": 1, "event": "recovery_finished"}
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558b2d7dae00
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: DB pointer 0x558b2d8e4000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:43:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.32 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.32 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b2d7d9350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 09:43:18 compute-0 ceph-mon[73551]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???) e1 preinit fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).mds e1 new map
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-10-10T09:43:15.731413+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 10 09:43:18 compute-0 ceph-mon[73551]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : last_changed 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : created 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 10 09:43:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 10 09:43:18 compute-0 podman[73552]: 2025-10-10 09:43:18.427620435 +0000 UTC m=+0.132412410 container create 308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c (image=quay.io/ceph/ceph:v19, name=cranky_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:43:18 compute-0 podman[73552]: 2025-10-10 09:43:18.337618757 +0000 UTC m=+0.042410742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: monmap epoch 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:18 compute-0 ceph-mon[73551]: last_changed 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: created 2025-10-10T09:43:13.233588+0000
Oct 10 09:43:18 compute-0 ceph-mon[73551]: min_mon_release 19 (squid)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: election_strategy: 1
Oct 10 09:43:18 compute-0 ceph-mon[73551]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:43:18 compute-0 ceph-mon[73551]: fsmap 
Oct 10 09:43:18 compute-0 ceph-mon[73551]: osdmap e1: 0 total, 0 up, 0 in
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mgrmap e1: no daemons active
Oct 10 09:43:18 compute-0 systemd[1]: Started libpod-conmon-308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c.scope.
Oct 10 09:43:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765da66035a9ddde83121eb04d6620ffd11f8ee0da9789b7cd4be6aefa7a9622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765da66035a9ddde83121eb04d6620ffd11f8ee0da9789b7cd4be6aefa7a9622/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765da66035a9ddde83121eb04d6620ffd11f8ee0da9789b7cd4be6aefa7a9622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:18 compute-0 podman[73552]: 2025-10-10 09:43:18.730955882 +0000 UTC m=+0.435747907 container init 308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c (image=quay.io/ceph/ceph:v19, name=cranky_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:18 compute-0 podman[73552]: 2025-10-10 09:43:18.743104234 +0000 UTC m=+0.447896209 container start 308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c (image=quay.io/ceph/ceph:v19, name=cranky_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:43:18 compute-0 podman[73552]: 2025-10-10 09:43:18.869010522 +0000 UTC m=+0.573802467 container attach 308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c (image=quay.io/ceph/ceph:v19, name=cranky_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct 10 09:43:19 compute-0 systemd[1]: libpod-308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c.scope: Deactivated successfully.
Oct 10 09:43:19 compute-0 podman[73629]: 2025-10-10 09:43:19.050099496 +0000 UTC m=+0.023852752 container died 308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c (image=quay.io/ceph/ceph:v19, name=cranky_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Oct 10 09:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-765da66035a9ddde83121eb04d6620ffd11f8ee0da9789b7cd4be6aefa7a9622-merged.mount: Deactivated successfully.
Oct 10 09:43:20 compute-0 podman[73629]: 2025-10-10 09:43:20.213419012 +0000 UTC m=+1.187172298 container remove 308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c (image=quay.io/ceph/ceph:v19, name=cranky_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:20 compute-0 systemd[1]: libpod-conmon-308f55e607d148e24ea96cdbeff3c1318389a17d6bc340cbc65c0dd6aa14518c.scope: Deactivated successfully.
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.33579607 +0000 UTC m=+0.076654445 container create d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c (image=quay.io/ceph/ceph:v19, name=peaceful_shtern, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 09:43:20 compute-0 systemd[1]: Started libpod-conmon-d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c.scope.
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.301142003 +0000 UTC m=+0.042000418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:20 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923b3d2642edc4eebc59b9dc001e20c2806892c5d09f143f60a873dbf1d654/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923b3d2642edc4eebc59b9dc001e20c2806892c5d09f143f60a873dbf1d654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46923b3d2642edc4eebc59b9dc001e20c2806892c5d09f143f60a873dbf1d654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.43820475 +0000 UTC m=+0.179063165 container init d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c (image=quay.io/ceph/ceph:v19, name=peaceful_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.44672881 +0000 UTC m=+0.187587155 container start d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c (image=quay.io/ceph/ceph:v19, name=peaceful_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.450971494 +0000 UTC m=+0.191829909 container attach d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c (image=quay.io/ceph/ceph:v19, name=peaceful_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 09:43:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct 10 09:43:20 compute-0 systemd[1]: libpod-d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c.scope: Deactivated successfully.
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.680809303 +0000 UTC m=+0.421667628 container died d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c (image=quay.io/ceph/ceph:v19, name=peaceful_shtern, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-46923b3d2642edc4eebc59b9dc001e20c2806892c5d09f143f60a873dbf1d654-merged.mount: Deactivated successfully.
Oct 10 09:43:20 compute-0 podman[73644]: 2025-10-10 09:43:20.72103373 +0000 UTC m=+0.461892075 container remove d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c (image=quay.io/ceph/ceph:v19, name=peaceful_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:20 compute-0 systemd[1]: libpod-conmon-d3266446b60a308147e1d7fdd9e8c4ff075284c48a5816baffa588919270318c.scope: Deactivated successfully.
Oct 10 09:43:20 compute-0 systemd[1]: Reloading.
Oct 10 09:43:20 compute-0 systemd-sysv-generator[73726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:20 compute-0 systemd-rc-local-generator[73722]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:21 compute-0 systemd[1]: Reloading.
Oct 10 09:43:21 compute-0 systemd-rc-local-generator[73764]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:21 compute-0 systemd-sysv-generator[73769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:21 compute-0 systemd[1]: Starting Ceph mgr.compute-0.xkdepb for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:43:21 compute-0 podman[73825]: 2025-10-10 09:43:21.562243472 +0000 UTC m=+0.042471314 container create 8d50af9bcf4066d29cfd579660d6625c2d189f518a5544e98ab7d66aa40f6902 (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c17381a328dfc8be13737365f22630e001e812c683601c49fc13ce69306b77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c17381a328dfc8be13737365f22630e001e812c683601c49fc13ce69306b77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c17381a328dfc8be13737365f22630e001e812c683601c49fc13ce69306b77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c17381a328dfc8be13737365f22630e001e812c683601c49fc13ce69306b77/merged/var/lib/ceph/mgr/ceph-compute-0.xkdepb supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 podman[73825]: 2025-10-10 09:43:21.63459683 +0000 UTC m=+0.114824692 container init 8d50af9bcf4066d29cfd579660d6625c2d189f518a5544e98ab7d66aa40f6902 (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:21 compute-0 podman[73825]: 2025-10-10 09:43:21.540269395 +0000 UTC m=+0.020497277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:21 compute-0 podman[73825]: 2025-10-10 09:43:21.652103245 +0000 UTC m=+0.132331077 container start 8d50af9bcf4066d29cfd579660d6625c2d189f518a5544e98ab7d66aa40f6902 (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 10 09:43:21 compute-0 bash[73825]: 8d50af9bcf4066d29cfd579660d6625c2d189f518a5544e98ab7d66aa40f6902
Oct 10 09:43:21 compute-0 systemd[1]: Started Ceph mgr.compute-0.xkdepb for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: pidfile_write: ignore empty --pid-file
Oct 10 09:43:21 compute-0 podman[73846]: 2025-10-10 09:43:21.735712436 +0000 UTC m=+0.047504105 container create 3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555 (image=quay.io/ceph/ceph:v19, name=intelligent_driscoll, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'alerts'
Oct 10 09:43:21 compute-0 systemd[1]: Started libpod-conmon-3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555.scope.
Oct 10 09:43:21 compute-0 podman[73846]: 2025-10-10 09:43:21.716041498 +0000 UTC m=+0.027833187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:21 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2c251c42d62c9c0b8454e17c8535879eba6ed8713330d1af0a0015781cdd55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2c251c42d62c9c0b8454e17c8535879eba6ed8713330d1af0a0015781cdd55/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2c251c42d62c9c0b8454e17c8535879eba6ed8713330d1af0a0015781cdd55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:21 compute-0 podman[73846]: 2025-10-10 09:43:21.845907391 +0000 UTC m=+0.157699100 container init 3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555 (image=quay.io/ceph/ceph:v19, name=intelligent_driscoll, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'balancer'
Oct 10 09:43:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:21.849+0000 7f345cd79140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:43:21 compute-0 podman[73846]: 2025-10-10 09:43:21.85472944 +0000 UTC m=+0.166521109 container start 3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555 (image=quay.io/ceph/ceph:v19, name=intelligent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:21 compute-0 podman[73846]: 2025-10-10 09:43:21.860792967 +0000 UTC m=+0.172584646 container attach 3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555 (image=quay.io/ceph/ceph:v19, name=intelligent_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:43:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:21.937+0000 7f345cd79140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:43:21 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'cephadm'
Oct 10 09:43:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 09:43:22 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194332607' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]: 
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]: {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "health": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "status": "HEALTH_OK",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "checks": {},
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "mutes": []
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "election_epoch": 5,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "quorum": [
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         0
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     ],
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "quorum_names": [
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "compute-0"
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     ],
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "quorum_age": 3,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "monmap": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "epoch": 1,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "min_mon_release_name": "squid",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_mons": 1
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "osdmap": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "epoch": 1,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_osds": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_up_osds": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "osd_up_since": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_in_osds": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "osd_in_since": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_remapped_pgs": 0
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "pgmap": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "pgs_by_state": [],
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_pgs": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_pools": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_objects": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "data_bytes": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "bytes_used": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "bytes_avail": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "bytes_total": 0
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "fsmap": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "epoch": 1,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "btime": "2025-10-10T09:43:15:731413+0000",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "by_rank": [],
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "up:standby": 0
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "mgrmap": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "available": false,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "num_standbys": 0,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "modules": [
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:             "iostat",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:             "nfs",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:             "restful"
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         ],
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "services": {}
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "servicemap": {
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "epoch": 1,
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "modified": "2025-10-10T09:43:15.734386+0000",
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:         "services": {}
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     },
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]:     "progress_events": {}
Oct 10 09:43:22 compute-0 intelligent_driscoll[73882]: }
Oct 10 09:43:22 compute-0 systemd[1]: libpod-3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555.scope: Deactivated successfully.
Oct 10 09:43:22 compute-0 podman[73846]: 2025-10-10 09:43:22.101709912 +0000 UTC m=+0.413501581 container died 3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555 (image=quay.io/ceph/ceph:v19, name=intelligent_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca2c251c42d62c9c0b8454e17c8535879eba6ed8713330d1af0a0015781cdd55-merged.mount: Deactivated successfully.
Oct 10 09:43:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4194332607' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:22 compute-0 podman[73846]: 2025-10-10 09:43:22.138294645 +0000 UTC m=+0.450086304 container remove 3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555 (image=quay.io/ceph/ceph:v19, name=intelligent_driscoll, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:22 compute-0 systemd[1]: libpod-conmon-3c840c5d17f304d44329f021c84301daf67424f8e21710cbad5486dbb3504555.scope: Deactivated successfully.
Oct 10 09:43:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'crash'
Oct 10 09:43:22 compute-0 ceph-mgr[73845]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:43:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:22.784+0000 7f345cd79140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:43:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'dashboard'
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:23.431+0000 7f345cd79140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   from numpy import show_config as show_numpy_config
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:23.603+0000 7f345cd79140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'influx'
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:23.675+0000 7f345cd79140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'insights'
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'iostat'
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:23.807+0000 7f345cd79140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:43:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:43:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'localpool'
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.223890609 +0000 UTC m=+0.054078559 container create bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456 (image=quay.io/ceph/ceph:v19, name=jolly_jemison, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:43:24 compute-0 systemd[1]: Started libpod-conmon-bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456.scope.
Oct 10 09:43:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b56eabf44e83ce272bb12d8be2d15cc7aa2ccd76743eeeb5fce8b96fa6321d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b56eabf44e83ce272bb12d8be2d15cc7aa2ccd76743eeeb5fce8b96fa6321d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b56eabf44e83ce272bb12d8be2d15cc7aa2ccd76743eeeb5fce8b96fa6321d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.199419667 +0000 UTC m=+0.029607637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.323847345 +0000 UTC m=+0.154035335 container init bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456 (image=quay.io/ceph/ceph:v19, name=jolly_jemison, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.333671819 +0000 UTC m=+0.163859769 container start bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456 (image=quay.io/ceph/ceph:v19, name=jolly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.337734567 +0000 UTC m=+0.167922607 container attach bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456 (image=quay.io/ceph/ceph:v19, name=jolly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:43:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mirroring'
Oct 10 09:43:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 09:43:24 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070156029' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:24 compute-0 jolly_jemison[73947]: 
Oct 10 09:43:24 compute-0 jolly_jemison[73947]: {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "health": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "status": "HEALTH_OK",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "checks": {},
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "mutes": []
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "election_epoch": 5,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "quorum": [
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         0
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     ],
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "quorum_names": [
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "compute-0"
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     ],
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "quorum_age": 6,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "monmap": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "epoch": 1,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "min_mon_release_name": "squid",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_mons": 1
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "osdmap": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "epoch": 1,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_osds": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_up_osds": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "osd_up_since": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_in_osds": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "osd_in_since": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_remapped_pgs": 0
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "pgmap": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "pgs_by_state": [],
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_pgs": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_pools": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_objects": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "data_bytes": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "bytes_used": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "bytes_avail": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "bytes_total": 0
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "fsmap": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "epoch": 1,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "btime": "2025-10-10T09:43:15:731413+0000",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "by_rank": [],
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "up:standby": 0
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "mgrmap": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "available": false,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "num_standbys": 0,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "modules": [
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:             "iostat",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:             "nfs",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:             "restful"
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         ],
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "services": {}
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "servicemap": {
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "epoch": 1,
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "modified": "2025-10-10T09:43:15.734386+0000",
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:         "services": {}
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     },
Oct 10 09:43:24 compute-0 jolly_jemison[73947]:     "progress_events": {}
Oct 10 09:43:24 compute-0 jolly_jemison[73947]: }
Oct 10 09:43:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'nfs'
Oct 10 09:43:24 compute-0 systemd[1]: libpod-bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456.scope: Deactivated successfully.
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.547398441 +0000 UTC m=+0.377586431 container died bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456 (image=quay.io/ceph/ceph:v19, name=jolly_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:24 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1070156029' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-67b56eabf44e83ce272bb12d8be2d15cc7aa2ccd76743eeeb5fce8b96fa6321d-merged.mount: Deactivated successfully.
Oct 10 09:43:24 compute-0 podman[73931]: 2025-10-10 09:43:24.606508129 +0000 UTC m=+0.436696109 container remove bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456 (image=quay.io/ceph/ceph:v19, name=jolly_jemison, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:24 compute-0 systemd[1]: libpod-conmon-bad9927c460ffb0fda5cb757e3454261962cc92deb3ce069a189ef3a46876456.scope: Deactivated successfully.
Oct 10 09:43:24 compute-0 ceph-mgr[73845]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:43:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:43:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:24.766+0000 7f345cd79140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:24.999+0000 7f345cd79140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:25.070+0000 7f345cd79140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_support'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:25.136+0000 7f345cd79140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:25.217+0000 7f345cd79140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'progress'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:25.292+0000 7f345cd79140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'prometheus'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:25.648+0000 7f345cd79140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:25.744+0000 7f345cd79140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'restful'
Oct 10 09:43:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rgw'
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:26.187+0000 7f345cd79140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rook'
Oct 10 09:43:26 compute-0 podman[73986]: 2025-10-10 09:43:26.681368168 +0000 UTC m=+0.046075537 container create e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684 (image=quay.io/ceph/ceph:v19, name=charming_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:43:26 compute-0 systemd[1]: Started libpod-conmon-e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684.scope.
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:26.743+0000 7f345cd79140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'selftest'
Oct 10 09:43:26 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1d3c37b533ca82b69a4e77574cb9a0db8f650bb6299af96d60d8a05be681d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1d3c37b533ca82b69a4e77574cb9a0db8f650bb6299af96d60d8a05be681d6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1d3c37b533ca82b69a4e77574cb9a0db8f650bb6299af96d60d8a05be681d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:26 compute-0 podman[73986]: 2025-10-10 09:43:26.662086643 +0000 UTC m=+0.026794032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:26 compute-0 podman[73986]: 2025-10-10 09:43:26.784833994 +0000 UTC m=+0.149541443 container init e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684 (image=quay.io/ceph/ceph:v19, name=charming_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:26 compute-0 podman[73986]: 2025-10-10 09:43:26.790628911 +0000 UTC m=+0.155336310 container start e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684 (image=quay.io/ceph/ceph:v19, name=charming_noyce, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:26 compute-0 podman[73986]: 2025-10-10 09:43:26.794995599 +0000 UTC m=+0.159703048 container attach e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684 (image=quay.io/ceph/ceph:v19, name=charming_noyce, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:26.816+0000 7f345cd79140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'stats'
Oct 10 09:43:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:26.892+0000 7f345cd79140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:43:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'status'
Oct 10 09:43:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 09:43:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007822069' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:26 compute-0 charming_noyce[74002]: 
Oct 10 09:43:26 compute-0 charming_noyce[74002]: {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "health": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "status": "HEALTH_OK",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "checks": {},
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "mutes": []
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "election_epoch": 5,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "quorum": [
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         0
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     ],
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "quorum_names": [
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "compute-0"
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     ],
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "quorum_age": 8,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "monmap": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "epoch": 1,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "min_mon_release_name": "squid",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_mons": 1
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "osdmap": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "epoch": 1,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_osds": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_up_osds": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "osd_up_since": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_in_osds": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "osd_in_since": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_remapped_pgs": 0
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "pgmap": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "pgs_by_state": [],
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_pgs": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_pools": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_objects": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "data_bytes": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "bytes_used": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "bytes_avail": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "bytes_total": 0
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "fsmap": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "epoch": 1,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "btime": "2025-10-10T09:43:15:731413+0000",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "by_rank": [],
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "up:standby": 0
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "mgrmap": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "available": false,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "num_standbys": 0,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "modules": [
Oct 10 09:43:26 compute-0 charming_noyce[74002]:             "iostat",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:             "nfs",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:             "restful"
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         ],
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "services": {}
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "servicemap": {
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "epoch": 1,
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "modified": "2025-10-10T09:43:15.734386+0000",
Oct 10 09:43:26 compute-0 charming_noyce[74002]:         "services": {}
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     },
Oct 10 09:43:26 compute-0 charming_noyce[74002]:     "progress_events": {}
Oct 10 09:43:26 compute-0 charming_noyce[74002]: }
Oct 10 09:43:27 compute-0 systemd[1]: libpod-e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684.scope: Deactivated successfully.
Oct 10 09:43:27 compute-0 podman[73986]: 2025-10-10 09:43:27.010052696 +0000 UTC m=+0.374760055 container died e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684 (image=quay.io/ceph/ceph:v19, name=charming_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 09:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b1d3c37b533ca82b69a4e77574cb9a0db8f650bb6299af96d60d8a05be681d6-merged.mount: Deactivated successfully.
Oct 10 09:43:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1007822069' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:27 compute-0 podman[73986]: 2025-10-10 09:43:27.04812729 +0000 UTC m=+0.412834649 container remove e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684 (image=quay.io/ceph/ceph:v19, name=charming_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:27.049+0000 7f345cd79140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telegraf'
Oct 10 09:43:27 compute-0 systemd[1]: libpod-conmon-e630e71be73efc20da25191448e594a847ef7aac81eec19a82038fcb81c6c684.scope: Deactivated successfully.
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:27.119+0000 7f345cd79140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telemetry'
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:27.282+0000 7f345cd79140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:27.506+0000 7f345cd79140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'volumes'
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:27.764+0000 7f345cd79140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'zabbix'
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:27.831+0000 7f345cd79140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: ms_deliver_dispatch: unhandled message 0x558abf7c69c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xkdepb
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr handle_mgr_map Activating!
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr handle_mgr_map I am now activating
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.xkdepb(active, starting, since 0.0154196s)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e1 all = 1
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Manager daemon compute-0.xkdepb is now available
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: balancer
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [balancer INFO root] Starting
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: crash
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:43:27
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [balancer INFO root] No pools available
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: devicehealth
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Starting
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: iostat
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: nfs
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: orchestrator
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: pg_autoscaler
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: progress
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [progress INFO root] Loading...
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [progress INFO root] No stored events to load
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded [] historic events
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] recovery thread starting
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] starting setup
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: rbd_support
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: restful
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: status
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [restful WARNING root] server not running: no certificate configured
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: telemetry
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] PerfHandler: starting
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TaskHandler: starting
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"} v 0)
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: [rbd_support INFO root] setup complete
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct 10 09:43:27 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: volumes
Oct 10 09:43:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:28 compute-0 ceph-mon[73551]: Activating manager daemon compute-0.xkdepb
Oct 10 09:43:28 compute-0 ceph-mon[73551]: mgrmap e2: compute-0.xkdepb(active, starting, since 0.0154196s)
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:28 compute-0 ceph-mon[73551]: from='mgr.14102 192.168.122.100:0/3196569721' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:28 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.xkdepb(active, since 1.02897s)
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.124119488 +0000 UTC m=+0.053057954 container create 11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e (image=quay.io/ceph/ceph:v19, name=jolly_dhawan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:29 compute-0 systemd[1]: Started libpod-conmon-11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e.scope.
Oct 10 09:43:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.097030068 +0000 UTC m=+0.025968614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e95ce5b3ef8188cf5802725addff02d324fb7dc84162c310d0037e2fe71638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e95ce5b3ef8188cf5802725addff02d324fb7dc84162c310d0037e2fe71638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e95ce5b3ef8188cf5802725addff02d324fb7dc84162c310d0037e2fe71638/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.212858692 +0000 UTC m=+0.141797258 container init 11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e (image=quay.io/ceph/ceph:v19, name=jolly_dhawan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.226866788 +0000 UTC m=+0.155805294 container start 11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e (image=quay.io/ceph/ceph:v19, name=jolly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.230612936 +0000 UTC m=+0.159551442 container attach 11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e (image=quay.io/ceph/ceph:v19, name=jolly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 09:43:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570642765' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]: 
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]: {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "health": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "status": "HEALTH_OK",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "checks": {},
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "mutes": []
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "election_epoch": 5,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "quorum": [
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         0
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     ],
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "quorum_names": [
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "compute-0"
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     ],
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "quorum_age": 11,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "monmap": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "epoch": 1,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "min_mon_release_name": "squid",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_mons": 1
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "osdmap": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "epoch": 1,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_osds": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_up_osds": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "osd_up_since": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_in_osds": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "osd_in_since": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_remapped_pgs": 0
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "pgmap": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "pgs_by_state": [],
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_pgs": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_pools": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_objects": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "data_bytes": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "bytes_used": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "bytes_avail": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "bytes_total": 0
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "fsmap": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "epoch": 1,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "btime": "2025-10-10T09:43:15:731413+0000",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "by_rank": [],
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "up:standby": 0
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "mgrmap": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "available": true,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "num_standbys": 0,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "modules": [
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:             "iostat",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:             "nfs",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:             "restful"
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         ],
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "services": {}
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "servicemap": {
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "epoch": 1,
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "modified": "2025-10-10T09:43:15.734386+0000",
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:         "services": {}
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     },
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]:     "progress_events": {}
Oct 10 09:43:29 compute-0 jolly_dhawan[74135]: }
Oct 10 09:43:29 compute-0 systemd[1]: libpod-11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e.scope: Deactivated successfully.
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.699860109 +0000 UTC m=+0.628798605 container died 11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e (image=quay.io/ceph/ceph:v19, name=jolly_dhawan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-29e95ce5b3ef8188cf5802725addff02d324fb7dc84162c310d0037e2fe71638-merged.mount: Deactivated successfully.
Oct 10 09:43:29 compute-0 podman[74120]: 2025-10-10 09:43:29.738228333 +0000 UTC m=+0.667166789 container remove 11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e (image=quay.io/ceph/ceph:v19, name=jolly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:43:29 compute-0 systemd[1]: libpod-conmon-11149249c260510085cc4b22f6936fefbecf5b1232bb1bc9fd295aa60bc0036e.scope: Deactivated successfully.
Oct 10 09:43:29 compute-0 podman[74176]: 2025-10-10 09:43:29.833919494 +0000 UTC m=+0.061023925 container create ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5 (image=quay.io/ceph/ceph:v19, name=ecstatic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 09:43:29 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:29 compute-0 ceph-mon[73551]: mgrmap e3: compute-0.xkdepb(active, since 1.02897s)
Oct 10 09:43:29 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3570642765' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 09:43:29 compute-0 systemd[1]: Started libpod-conmon-ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5.scope.
Oct 10 09:43:29 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.xkdepb(active, since 2s)
Oct 10 09:43:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791f5d31b10b3a56dd9e237d9f9799784e382bc41eb45ce8abc2beeabe4783a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791f5d31b10b3a56dd9e237d9f9799784e382bc41eb45ce8abc2beeabe4783a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791f5d31b10b3a56dd9e237d9f9799784e382bc41eb45ce8abc2beeabe4783a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 podman[74176]: 2025-10-10 09:43:29.812205056 +0000 UTC m=+0.039309487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791f5d31b10b3a56dd9e237d9f9799784e382bc41eb45ce8abc2beeabe4783a7/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:29 compute-0 podman[74176]: 2025-10-10 09:43:29.922803464 +0000 UTC m=+0.149907905 container init ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5 (image=quay.io/ceph/ceph:v19, name=ecstatic_lovelace, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 09:43:29 compute-0 podman[74176]: 2025-10-10 09:43:29.936714647 +0000 UTC m=+0.163819088 container start ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5 (image=quay.io/ceph/ceph:v19, name=ecstatic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:43:29 compute-0 podman[74176]: 2025-10-10 09:43:29.943379774 +0000 UTC m=+0.170484215 container attach ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5 (image=quay.io/ceph/ceph:v19, name=ecstatic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 10 09:43:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4009619' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:43:30 compute-0 ecstatic_lovelace[74192]: 
Oct 10 09:43:30 compute-0 ecstatic_lovelace[74192]: [global]
Oct 10 09:43:30 compute-0 ecstatic_lovelace[74192]:         fsid = 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:30 compute-0 ecstatic_lovelace[74192]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 10 09:43:30 compute-0 systemd[1]: libpod-ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5.scope: Deactivated successfully.
Oct 10 09:43:30 compute-0 podman[74218]: 2025-10-10 09:43:30.376573772 +0000 UTC m=+0.029911388 container died ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5 (image=quay.io/ceph/ceph:v19, name=ecstatic_lovelace, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-791f5d31b10b3a56dd9e237d9f9799784e382bc41eb45ce8abc2beeabe4783a7-merged.mount: Deactivated successfully.
Oct 10 09:43:30 compute-0 podman[74218]: 2025-10-10 09:43:30.415059 +0000 UTC m=+0.068396596 container remove ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5 (image=quay.io/ceph/ceph:v19, name=ecstatic_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:43:30 compute-0 systemd[1]: libpod-conmon-ea6412a4b3214d9c2dedbf3f08c3e5b830bccdec6faa7f8f574dce668e814ee5.scope: Deactivated successfully.
Oct 10 09:43:30 compute-0 podman[74233]: 2025-10-10 09:43:30.475438751 +0000 UTC m=+0.037715972 container create ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e (image=quay.io/ceph/ceph:v19, name=determined_varahamihira, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 09:43:30 compute-0 systemd[1]: Started libpod-conmon-ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e.scope.
Oct 10 09:43:30 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5ade15fb1fa652a18f230dc4b92b552b289da253672ce6237968dfd83d1302/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5ade15fb1fa652a18f230dc4b92b552b289da253672ce6237968dfd83d1302/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5ade15fb1fa652a18f230dc4b92b552b289da253672ce6237968dfd83d1302/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:30 compute-0 podman[74233]: 2025-10-10 09:43:30.54603546 +0000 UTC m=+0.108312761 container init ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e (image=quay.io/ceph/ceph:v19, name=determined_varahamihira, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:43:30 compute-0 podman[74233]: 2025-10-10 09:43:30.458470205 +0000 UTC m=+0.020747436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:30 compute-0 podman[74233]: 2025-10-10 09:43:30.556541908 +0000 UTC m=+0.118819119 container start ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e (image=quay.io/ceph/ceph:v19, name=determined_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:30 compute-0 podman[74233]: 2025-10-10 09:43:30.559418054 +0000 UTC m=+0.121695355 container attach ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e (image=quay.io/ceph/ceph:v19, name=determined_varahamihira, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 09:43:30 compute-0 ceph-mon[73551]: mgrmap e4: compute-0.xkdepb(active, since 2s)
Oct 10 09:43:30 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4009619' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:43:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct 10 09:43:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933464431' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:31 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2933464431' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 10 09:43:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933464431' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  1: '-n'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  2: 'mgr.compute-0.xkdepb'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  3: '-f'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  4: '--setuser'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  5: 'ceph'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  6: '--setgroup'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  7: 'ceph'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 10 09:43:31 compute-0 ceph-mgr[73845]: mgr respawn  exe_path /proc/self/exe
Oct 10 09:43:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.xkdepb(active, since 4s)
Oct 10 09:43:31 compute-0 systemd[1]: libpod-ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e.scope: Deactivated successfully.
Oct 10 09:43:31 compute-0 conmon[74249]: conmon ae4c78b34051fdb66b22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e.scope/container/memory.events
Oct 10 09:43:31 compute-0 podman[74233]: 2025-10-10 09:43:31.929307971 +0000 UTC m=+1.491585182 container died ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e (image=quay.io/ceph/ceph:v19, name=determined_varahamihira, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d5ade15fb1fa652a18f230dc4b92b552b289da253672ce6237968dfd83d1302-merged.mount: Deactivated successfully.
Oct 10 09:43:31 compute-0 podman[74233]: 2025-10-10 09:43:31.970504531 +0000 UTC m=+1.532781762 container remove ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e (image=quay.io/ceph/ceph:v19, name=determined_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:43:31 compute-0 systemd[1]: libpod-conmon-ae4c78b34051fdb66b22ae020bef281506748c497c58f159d9c5f42c6799422e.scope: Deactivated successfully.
Oct 10 09:43:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setuser ceph since I am not root
Oct 10 09:43:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setgroup ceph since I am not root
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: pidfile_write: ignore empty --pid-file
Oct 10 09:43:32 compute-0 podman[74285]: 2025-10-10 09:43:32.051307697 +0000 UTC m=+0.051142379 container create 41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e (image=quay.io/ceph/ceph:v19, name=nice_chatterjee, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'alerts'
Oct 10 09:43:32 compute-0 systemd[1]: Started libpod-conmon-41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e.scope.
Oct 10 09:43:32 compute-0 podman[74285]: 2025-10-10 09:43:32.026436351 +0000 UTC m=+0.026271073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf1f2804c8848ca8dc3026d49230722b2c2f9c46ddd591d76d1c4b975ac5256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf1f2804c8848ca8dc3026d49230722b2c2f9c46ddd591d76d1c4b975ac5256/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf1f2804c8848ca8dc3026d49230722b2c2f9c46ddd591d76d1c4b975ac5256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:32 compute-0 podman[74285]: 2025-10-10 09:43:32.142560746 +0000 UTC m=+0.142395518 container init 41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e (image=quay.io/ceph/ceph:v19, name=nice_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:43:32 compute-0 podman[74285]: 2025-10-10 09:43:32.148351823 +0000 UTC m=+0.148186525 container start 41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e (image=quay.io/ceph/ceph:v19, name=nice_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:43:32 compute-0 podman[74285]: 2025-10-10 09:43:32.151927085 +0000 UTC m=+0.151761877 container attach 41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e (image=quay.io/ceph/ceph:v19, name=nice_chatterjee, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:43:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:32.201+0000 7ff2292b6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'balancer'
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:43:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:32.281+0000 7ff2292b6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'cephadm'
Oct 10 09:43:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 10 09:43:32 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/845659328' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 09:43:32 compute-0 nice_chatterjee[74321]: {
Oct 10 09:43:32 compute-0 nice_chatterjee[74321]:     "epoch": 5,
Oct 10 09:43:32 compute-0 nice_chatterjee[74321]:     "available": true,
Oct 10 09:43:32 compute-0 nice_chatterjee[74321]:     "active_name": "compute-0.xkdepb",
Oct 10 09:43:32 compute-0 nice_chatterjee[74321]:     "num_standby": 0
Oct 10 09:43:32 compute-0 nice_chatterjee[74321]: }
Oct 10 09:43:32 compute-0 systemd[1]: libpod-41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e.scope: Deactivated successfully.
Oct 10 09:43:32 compute-0 podman[74347]: 2025-10-10 09:43:32.625701132 +0000 UTC m=+0.033115516 container died 41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e (image=quay.io/ceph/ceph:v19, name=nice_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf1f2804c8848ca8dc3026d49230722b2c2f9c46ddd591d76d1c4b975ac5256-merged.mount: Deactivated successfully.
Oct 10 09:43:32 compute-0 podman[74347]: 2025-10-10 09:43:32.667023586 +0000 UTC m=+0.074437970 container remove 41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e (image=quay.io/ceph/ceph:v19, name=nice_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:43:32 compute-0 systemd[1]: libpod-conmon-41f380956a009c15f6868bb417052aaee709cb88d8c20eb678fafeb53f73464e.scope: Deactivated successfully.
Oct 10 09:43:32 compute-0 podman[74374]: 2025-10-10 09:43:32.759220779 +0000 UTC m=+0.062910199 container create 2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf (image=quay.io/ceph/ceph:v19, name=confident_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:43:32 compute-0 systemd[1]: Started libpod-conmon-2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf.scope.
Oct 10 09:43:32 compute-0 podman[74374]: 2025-10-10 09:43:32.728199765 +0000 UTC m=+0.031889245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d36733b9e9ac0a4e9310edf8d349bd20c6224e0fe0bda0a36ddde842621139/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d36733b9e9ac0a4e9310edf8d349bd20c6224e0fe0bda0a36ddde842621139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d36733b9e9ac0a4e9310edf8d349bd20c6224e0fe0bda0a36ddde842621139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:32 compute-0 podman[74374]: 2025-10-10 09:43:32.852984934 +0000 UTC m=+0.156674334 container init 2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf (image=quay.io/ceph/ceph:v19, name=confident_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:32 compute-0 podman[74374]: 2025-10-10 09:43:32.863125289 +0000 UTC m=+0.166814689 container start 2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf (image=quay.io/ceph/ceph:v19, name=confident_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:32 compute-0 podman[74374]: 2025-10-10 09:43:32.867442456 +0000 UTC m=+0.171131916 container attach 2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf (image=quay.io/ceph/ceph:v19, name=confident_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:32 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2933464431' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 10 09:43:32 compute-0 ceph-mon[73551]: mgrmap e5: compute-0.xkdepb(active, since 4s)
Oct 10 09:43:32 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/845659328' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 09:43:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'crash'
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:33.060+0000 7ff2292b6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'dashboard'
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:33.688+0000 7ff2292b6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   from numpy import show_config as show_numpy_config
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:33.855+0000 7ff2292b6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'influx'
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:33.929+0000 7ff2292b6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:43:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'insights'
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'iostat'
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:43:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:34.073+0000 7ff2292b6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'localpool'
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mirroring'
Oct 10 09:43:34 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'nfs'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.088+0000 7ff2292b6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.308+0000 7ff2292b6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.389+0000 7ff2292b6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_support'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.458+0000 7ff2292b6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.537+0000 7ff2292b6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'progress'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.613+0000 7ff2292b6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'prometheus'
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:35.975+0000 7ff2292b6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:43:35 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:43:36 compute-0 ceph-mgr[73845]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:43:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:36.082+0000 7ff2292b6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:43:36 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'restful'
Oct 10 09:43:36 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rgw'
Oct 10 09:43:36 compute-0 ceph-mgr[73845]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:43:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:36.515+0000 7ff2292b6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:43:36 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rook'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.095+0000 7ff2292b6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'selftest'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.170+0000 7ff2292b6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.260+0000 7ff2292b6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'stats'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'status'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.408+0000 7ff2292b6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telegraf'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.481+0000 7ff2292b6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telemetry'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.641+0000 7ff2292b6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:37.873+0000 7ff2292b6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:43:37 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'volumes'
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:43:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:38.156+0000 7ff2292b6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'zabbix'
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:43:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:43:38.225+0000 7ff2292b6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Active manager daemon compute-0.xkdepb restarted
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xkdepb
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: ms_deliver_dispatch: unhandled message 0x55c8af1eed00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr handle_mgr_map Activating!
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr handle_mgr_map I am now activating
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.xkdepb(active, starting, since 0.0134388s)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e1 all = 1
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: balancer
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Starting
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Manager daemon compute-0.xkdepb is now available
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:43:38
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [balancer INFO root] No pools available
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: cephadm
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: crash
Oct 10 09:43:38 compute-0 ceph-mon[73551]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:43:38 compute-0 ceph-mon[73551]: Activating manager daemon compute-0.xkdepb
Oct 10 09:43:38 compute-0 ceph-mon[73551]: osdmap e2: 0 total, 0 up, 0 in
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mgrmap e6: compute-0.xkdepb(active, starting, since 0.0134388s)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mon[73551]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:43:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: devicehealth
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Starting
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: iostat
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: nfs
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: orchestrator
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: pg_autoscaler
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: progress
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [progress INFO root] Loading...
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [progress INFO root] No stored events to load
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded [] historic events
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] recovery thread starting
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] starting setup
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: rbd_support
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: restful
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [restful WARNING root] server not running: no certificate configured
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: status
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: telemetry
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] PerfHandler: starting
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TaskHandler: starting
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"} v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] setup complete
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931862 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:43:38 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: volumes
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct 10 09:43:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.xkdepb(active, since 1.02371s)
Oct 10 09:43:39 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 10 09:43:39 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 10 09:43:39 compute-0 confident_lalande[74390]: {
Oct 10 09:43:39 compute-0 confident_lalande[74390]:     "mgrmap_epoch": 7,
Oct 10 09:43:39 compute-0 confident_lalande[74390]:     "initialized": true
Oct 10 09:43:39 compute-0 confident_lalande[74390]: }
Oct 10 09:43:39 compute-0 systemd[1]: libpod-2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf.scope: Deactivated successfully.
Oct 10 09:43:39 compute-0 podman[74374]: 2025-10-10 09:43:39.285631741 +0000 UTC m=+6.589321131 container died 2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf (image=quay.io/ceph/ceph:v19, name=confident_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:43:39 compute-0 ceph-mon[73551]: Found migration_current of "None". Setting to last migration.
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:39 compute-0 ceph-mon[73551]: mgrmap e7: compute-0.xkdepb(active, since 1.02371s)
Oct 10 09:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-47d36733b9e9ac0a4e9310edf8d349bd20c6224e0fe0bda0a36ddde842621139-merged.mount: Deactivated successfully.
Oct 10 09:43:39 compute-0 podman[74374]: 2025-10-10 09:43:39.327451107 +0000 UTC m=+6.631140467 container remove 2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf (image=quay.io/ceph/ceph:v19, name=confident_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:43:39 compute-0 systemd[1]: libpod-conmon-2f6317f6fc35d3b94399206911168b794fb822beae5355414c74a9e5ffcd9daf.scope: Deactivated successfully.
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.393286631 +0000 UTC m=+0.041353961 container create 0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985 (image=quay.io/ceph/ceph:v19, name=silly_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:43:39 compute-0 systemd[1]: Started libpod-conmon-0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985.scope.
Oct 10 09:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d9e04275bf34d6cd0a0bae932657565c1e7c7dd4f75423973520acd953a946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d9e04275bf34d6cd0a0bae932657565c1e7c7dd4f75423973520acd953a946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d9e04275bf34d6cd0a0bae932657565c1e7c7dd4f75423973520acd953a946/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.468693142 +0000 UTC m=+0.116760512 container init 0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985 (image=quay.io/ceph/ceph:v19, name=silly_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.375239546 +0000 UTC m=+0.023306896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.474151428 +0000 UTC m=+0.122218758 container start 0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985 (image=quay.io/ceph/ceph:v19, name=silly_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.477019246 +0000 UTC m=+0.125086616 container attach 0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985 (image=quay.io/ceph/ceph:v19, name=silly_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:39 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct 10 09:43:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 10 09:43:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:39 compute-0 systemd[1]: libpod-0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985.scope: Deactivated successfully.
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.861847568 +0000 UTC m=+0.509914958 container died 0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985 (image=quay.io/ceph/ceph:v19, name=silly_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d9e04275bf34d6cd0a0bae932657565c1e7c7dd4f75423973520acd953a946-merged.mount: Deactivated successfully.
Oct 10 09:43:39 compute-0 podman[74540]: 2025-10-10 09:43:39.904864354 +0000 UTC m=+0.552931694 container remove 0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985 (image=quay.io/ceph/ceph:v19, name=silly_mahavira, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:39 compute-0 systemd[1]: libpod-conmon-0be367762c8232f83971e72be8852719eba48cbefddd388bf86cab3193ac0985.scope: Deactivated successfully.
Oct 10 09:43:39 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:43:39] ENGINE Bus STARTING
Oct 10 09:43:39 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:43:39] ENGINE Bus STARTING
Oct 10 09:43:39 compute-0 podman[74595]: 2025-10-10 09:43:39.977801541 +0000 UTC m=+0.050922828 container create 7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb (image=quay.io/ceph/ceph:v19, name=silly_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:40 compute-0 systemd[1]: Started libpod-conmon-7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb.scope.
Oct 10 09:43:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:40 compute-0 podman[74595]: 2025-10-10 09:43:39.95458846 +0000 UTC m=+0.027709807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3e4fe70dfee5279388cd30a72b7898201b88f025a0aa4535793cb9aafa552a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3e4fe70dfee5279388cd30a72b7898201b88f025a0aa4535793cb9aafa552a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3e4fe70dfee5279388cd30a72b7898201b88f025a0aa4535793cb9aafa552a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 podman[74595]: 2025-10-10 09:43:40.064415214 +0000 UTC m=+0.137536521 container init 7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb (image=quay.io/ceph/ceph:v19, name=silly_franklin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 09:43:40 compute-0 podman[74595]: 2025-10-10 09:43:40.070839053 +0000 UTC m=+0.143960330 container start 7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb (image=quay.io/ceph/ceph:v19, name=silly_franklin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:40 compute-0 podman[74595]: 2025-10-10 09:43:40.07426934 +0000 UTC m=+0.147390647 container attach 7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb (image=quay.io/ceph/ceph:v19, name=silly_franklin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:43:40] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:43:40] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:43:40] ENGINE Client ('192.168.122.100', 54384) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:43:40] ENGINE Client ('192.168.122.100', 54384) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:43:40] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:43:40] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:43:40] ENGINE Bus STARTED
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:43:40] ENGINE Bus STARTED
Oct 10 09:43:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 10 09:43:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct 10 09:43:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO root] Set ssh ssh_user
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 10 09:43:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct 10 09:43:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO root] Set ssh ssh_config
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 10 09:43:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 10 09:43:40 compute-0 silly_franklin[74623]: ssh user set to ceph-admin. sudo will be used
Oct 10 09:43:40 compute-0 systemd[1]: libpod-7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb.scope: Deactivated successfully.
Oct 10 09:43:40 compute-0 podman[74595]: 2025-10-10 09:43:40.437221375 +0000 UTC m=+0.510342692 container died 7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb (image=quay.io/ceph/ceph:v19, name=silly_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e3e4fe70dfee5279388cd30a72b7898201b88f025a0aa4535793cb9aafa552a-merged.mount: Deactivated successfully.
Oct 10 09:43:40 compute-0 podman[74595]: 2025-10-10 09:43:40.478516313 +0000 UTC m=+0.551637630 container remove 7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb (image=quay.io/ceph/ceph:v19, name=silly_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:40 compute-0 systemd[1]: libpod-conmon-7356106100bf4ec68179f41d1a863d2eb5e86f389f97db5a7283678cf08a4aeb.scope: Deactivated successfully.
Oct 10 09:43:40 compute-0 podman[74672]: 2025-10-10 09:43:40.557202656 +0000 UTC m=+0.056045502 container create 87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c (image=quay.io/ceph/ceph:v19, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 09:43:40 compute-0 systemd[1]: Started libpod-conmon-87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c.scope.
Oct 10 09:43:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa75ac73cec01de3b29425e7ca51a5230f8cb1a9062d5c4babdfc47f444716de/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa75ac73cec01de3b29425e7ca51a5230f8cb1a9062d5c4babdfc47f444716de/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 podman[74672]: 2025-10-10 09:43:40.529914386 +0000 UTC m=+0.028757282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa75ac73cec01de3b29425e7ca51a5230f8cb1a9062d5c4babdfc47f444716de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa75ac73cec01de3b29425e7ca51a5230f8cb1a9062d5c4babdfc47f444716de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa75ac73cec01de3b29425e7ca51a5230f8cb1a9062d5c4babdfc47f444716de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:40 compute-0 podman[74672]: 2025-10-10 09:43:40.641849213 +0000 UTC m=+0.140692039 container init 87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c (image=quay.io/ceph/ceph:v19, name=eager_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:40 compute-0 podman[74672]: 2025-10-10 09:43:40.651964617 +0000 UTC m=+0.150807433 container start 87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c (image=quay.io/ceph/ceph:v19, name=eager_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:43:40 compute-0 podman[74672]: 2025-10-10 09:43:40.654909188 +0000 UTC m=+0.153752034 container attach 87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c (image=quay.io/ceph/ceph:v19, name=eager_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mon[73551]: [10/Oct/2025:09:43:39] ENGINE Bus STARTING
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.xkdepb(active, since 2s)
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct 10 09:43:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: [cephadm INFO root] Set ssh private key
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 10 09:43:41 compute-0 systemd[1]: libpod-87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c.scope: Deactivated successfully.
Oct 10 09:43:41 compute-0 podman[74714]: 2025-10-10 09:43:41.093356068 +0000 UTC m=+0.033169552 container died 87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c (image=quay.io/ceph/ceph:v19, name=eager_hellman, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa75ac73cec01de3b29425e7ca51a5230f8cb1a9062d5c4babdfc47f444716de-merged.mount: Deactivated successfully.
Oct 10 09:43:41 compute-0 podman[74714]: 2025-10-10 09:43:41.133518536 +0000 UTC m=+0.073331990 container remove 87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c (image=quay.io/ceph/ceph:v19, name=eager_hellman, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:41 compute-0 systemd[1]: libpod-conmon-87e54aafaa7e32682bc85622eb82ccb51e5cee08c2c6c1957736a354b46e9e2c.scope: Deactivated successfully.
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.243425504 +0000 UTC m=+0.069917115 container create 1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62 (image=quay.io/ceph/ceph:v19, name=musing_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:41 compute-0 systemd[1]: Started libpod-conmon-1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62.scope.
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.224084875 +0000 UTC m=+0.050576516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cdf18b5eb7df47b675d57f9845cf0240e32acc13a3f2ea89284db3dbb0a40/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cdf18b5eb7df47b675d57f9845cf0240e32acc13a3f2ea89284db3dbb0a40/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cdf18b5eb7df47b675d57f9845cf0240e32acc13a3f2ea89284db3dbb0a40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cdf18b5eb7df47b675d57f9845cf0240e32acc13a3f2ea89284db3dbb0a40/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cdf18b5eb7df47b675d57f9845cf0240e32acc13a3f2ea89284db3dbb0a40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.348478587 +0000 UTC m=+0.174970238 container init 1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62 (image=quay.io/ceph/ceph:v19, name=musing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.361734478 +0000 UTC m=+0.188226089 container start 1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62 (image=quay.io/ceph/ceph:v19, name=musing_euler, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.365132044 +0000 UTC m=+0.191623685 container attach 1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62 (image=quay.io/ceph/ceph:v19, name=musing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct 10 09:43:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 10 09:43:41 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 10 09:43:41 compute-0 systemd[1]: libpod-1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62.scope: Deactivated successfully.
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.750487043 +0000 UTC m=+0.576978744 container died 1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62 (image=quay.io/ceph/ceph:v19, name=musing_euler, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f9cdf18b5eb7df47b675d57f9845cf0240e32acc13a3f2ea89284db3dbb0a40-merged.mount: Deactivated successfully.
Oct 10 09:43:41 compute-0 podman[74729]: 2025-10-10 09:43:41.798695217 +0000 UTC m=+0.625186818 container remove 1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62 (image=quay.io/ceph/ceph:v19, name=musing_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 09:43:41 compute-0 systemd[1]: libpod-conmon-1f3264c5a01631574a7c5c37fdafe752419191d008a11ea2588dff6027237c62.scope: Deactivated successfully.
Oct 10 09:43:41 compute-0 ceph-mon[73551]: [10/Oct/2025:09:43:40] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:43:41 compute-0 ceph-mon[73551]: [10/Oct/2025:09:43:40] ENGINE Client ('192.168.122.100', 54384) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:43:41 compute-0 ceph-mon[73551]: [10/Oct/2025:09:43:40] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:43:41 compute-0 ceph-mon[73551]: [10/Oct/2025:09:43:40] ENGINE Bus STARTED
Oct 10 09:43:41 compute-0 ceph-mon[73551]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:41 compute-0 ceph-mon[73551]: Set ssh ssh_user
Oct 10 09:43:41 compute-0 ceph-mon[73551]: Set ssh ssh_config
Oct 10 09:43:41 compute-0 ceph-mon[73551]: ssh user set to ceph-admin. sudo will be used
Oct 10 09:43:41 compute-0 ceph-mon[73551]: mgrmap e8: compute-0.xkdepb(active, since 2s)
Oct 10 09:43:41 compute-0 ceph-mon[73551]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:41 compute-0 ceph-mon[73551]: Set ssh ssh_identity_key
Oct 10 09:43:41 compute-0 ceph-mon[73551]: Set ssh private key
Oct 10 09:43:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:41 compute-0 podman[74783]: 2025-10-10 09:43:41.858097523 +0000 UTC m=+0.041223688 container create 0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258 (image=quay.io/ceph/ceph:v19, name=compassionate_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:41 compute-0 systemd[1]: Started libpod-conmon-0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258.scope.
Oct 10 09:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7e5bdade578737b5dfa6add46207a2e7d0b20b55858283a5360b7928815ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7e5bdade578737b5dfa6add46207a2e7d0b20b55858283a5360b7928815ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7e5bdade578737b5dfa6add46207a2e7d0b20b55858283a5360b7928815ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:41 compute-0 podman[74783]: 2025-10-10 09:43:41.837987017 +0000 UTC m=+0.021113232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:41 compute-0 podman[74783]: 2025-10-10 09:43:41.94189107 +0000 UTC m=+0.125017265 container init 0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258 (image=quay.io/ceph/ceph:v19, name=compassionate_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 09:43:41 compute-0 podman[74783]: 2025-10-10 09:43:41.948516435 +0000 UTC m=+0.131642630 container start 0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258 (image=quay.io/ceph/ceph:v19, name=compassionate_ellis, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 09:43:41 compute-0 podman[74783]: 2025-10-10 09:43:41.952359956 +0000 UTC m=+0.135486141 container attach 0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258 (image=quay.io/ceph/ceph:v19, name=compassionate_ellis, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:42 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:42 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:42 compute-0 compassionate_ellis[74799]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpSi95KFUZ5bFJz1z+RGi0dZz2/7kMh7iwgF0gnIxPl3+VgfTfrHpA6TdjepWH73UXljHjwmGB8disay2GBpJH45BpVxGjt1iVjwyUoejZQ5thxRjor1Gv3AmizyWTGoWcTOs3Gwk1zg1Ot/bUlWRkJ1yP37bV4P1KmjiIm0F+PZVf7t+IZWnpZm9BFCzjyrGmsKyymSXZCS4nBsdxAdwk1rEiohsrS2B46r2bPJsSDJQ4XTzhtiXN36gl0E6ZfTHbh4tGeQRSbRV0opIddh+kMK/EE2bOCQfuABjLG2rjpS/72YrcKj90zNoBKafWb4NJcMPKUPLa/bqEmF3GzXJQ67EnC5aCLkyZJWoDwDljSkdRrqfzyQKJLd+RpRdmWPTInc4KpVrH807znc1ltdNln4vUKY1qfaC60BVIWiFXBBvZqHSxtqwkpnGRfUXd+sn6eeHef0mKLJEndbziziQFLuCj2GWIZtCk/J3UeoMN6h2vRYYbZb6F+NUZvm5i4AM= zuul@controller
Oct 10 09:43:42 compute-0 systemd[1]: libpod-0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258.scope: Deactivated successfully.
Oct 10 09:43:42 compute-0 podman[74825]: 2025-10-10 09:43:42.344237328 +0000 UTC m=+0.026967030 container died 0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258 (image=quay.io/ceph/ceph:v19, name=compassionate_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 10 09:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f7e5bdade578737b5dfa6add46207a2e7d0b20b55858283a5360b7928815ad-merged.mount: Deactivated successfully.
Oct 10 09:43:42 compute-0 podman[74825]: 2025-10-10 09:43:42.388735555 +0000 UTC m=+0.071465257 container remove 0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258 (image=quay.io/ceph/ceph:v19, name=compassionate_ellis, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:43:42 compute-0 systemd[1]: libpod-conmon-0844e0653936e54e3a62364ce0f603aca92366462247ca5b525521972d437258.scope: Deactivated successfully.
Oct 10 09:43:42 compute-0 podman[74840]: 2025-10-10 09:43:42.49681462 +0000 UTC m=+0.070768963 container create 709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33 (image=quay.io/ceph/ceph:v19, name=nervous_nobel, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:42 compute-0 systemd[1]: Started libpod-conmon-709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33.scope.
Oct 10 09:43:42 compute-0 podman[74840]: 2025-10-10 09:43:42.468420272 +0000 UTC m=+0.042374665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e1148ac73555bb46bbbb8c6929f434b201762a67c9abc46c014bbcde939bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e1148ac73555bb46bbbb8c6929f434b201762a67c9abc46c014bbcde939bf1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e1148ac73555bb46bbbb8c6929f434b201762a67c9abc46c014bbcde939bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:42 compute-0 podman[74840]: 2025-10-10 09:43:42.599895315 +0000 UTC m=+0.173849659 container init 709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33 (image=quay.io/ceph/ceph:v19, name=nervous_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:42 compute-0 podman[74840]: 2025-10-10 09:43:42.609511953 +0000 UTC m=+0.183466256 container start 709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33 (image=quay.io/ceph/ceph:v19, name=nervous_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:43:42 compute-0 podman[74840]: 2025-10-10 09:43:42.613102425 +0000 UTC m=+0.187056768 container attach 709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33 (image=quay.io/ceph/ceph:v19, name=nervous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 09:43:42 compute-0 ceph-mon[73551]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:42 compute-0 ceph-mon[73551]: Set ssh ssh_identity_pub
Oct 10 09:43:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:43 compute-0 sshd-session[74882]: Accepted publickey for ceph-admin from 192.168.122.100 port 39110 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:43 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 09:43:43 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 09:43:43 compute-0 systemd-logind[806]: New session 22 of user ceph-admin.
Oct 10 09:43:43 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 09:43:43 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 10 09:43:43 compute-0 systemd[74886]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053158 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:43:43 compute-0 sshd-session[74899]: Accepted publickey for ceph-admin from 192.168.122.100 port 39126 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:43 compute-0 systemd-logind[806]: New session 24 of user ceph-admin.
Oct 10 09:43:43 compute-0 systemd[74886]: Queued start job for default target Main User Target.
Oct 10 09:43:43 compute-0 systemd[74886]: Created slice User Application Slice.
Oct 10 09:43:43 compute-0 systemd[74886]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:43:43 compute-0 systemd[74886]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:43:43 compute-0 systemd[74886]: Reached target Paths.
Oct 10 09:43:43 compute-0 systemd[74886]: Reached target Timers.
Oct 10 09:43:43 compute-0 systemd[74886]: Starting D-Bus User Message Bus Socket...
Oct 10 09:43:43 compute-0 systemd[74886]: Starting Create User's Volatile Files and Directories...
Oct 10 09:43:43 compute-0 systemd[74886]: Finished Create User's Volatile Files and Directories.
Oct 10 09:43:43 compute-0 systemd[74886]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:43:43 compute-0 systemd[74886]: Reached target Sockets.
Oct 10 09:43:43 compute-0 systemd[74886]: Reached target Basic System.
Oct 10 09:43:43 compute-0 systemd[74886]: Reached target Main User Target.
Oct 10 09:43:43 compute-0 systemd[74886]: Startup finished in 143ms.
Oct 10 09:43:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 10 09:43:43 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Oct 10 09:43:43 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 10 09:43:43 compute-0 sshd-session[74882]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:43 compute-0 sshd-session[74899]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:43 compute-0 sudo[74906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:43 compute-0 sudo[74906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:43 compute-0 sudo[74906]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:43 compute-0 sshd-session[74931]: Accepted publickey for ceph-admin from 192.168.122.100 port 39134 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:43 compute-0 systemd-logind[806]: New session 25 of user ceph-admin.
Oct 10 09:43:43 compute-0 ceph-mon[73551]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:43 compute-0 ceph-mon[73551]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:43 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 10 09:43:43 compute-0 sshd-session[74931]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:43 compute-0 sudo[74935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Oct 10 09:43:43 compute-0 sudo[74935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:44 compute-0 sudo[74935]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:44 compute-0 sshd-session[74960]: Accepted publickey for ceph-admin from 192.168.122.100 port 39138 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:44 compute-0 systemd-logind[806]: New session 26 of user ceph-admin.
Oct 10 09:43:44 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:44 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 10 09:43:44 compute-0 sshd-session[74960]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:44 compute-0 sudo[74964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 10 09:43:44 compute-0 sudo[74964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:44 compute-0 sudo[74964]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:44 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 10 09:43:44 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 10 09:43:44 compute-0 sshd-session[74989]: Accepted publickey for ceph-admin from 192.168.122.100 port 39144 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:44 compute-0 systemd-logind[806]: New session 27 of user ceph-admin.
Oct 10 09:43:44 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 10 09:43:44 compute-0 sshd-session[74989]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:44 compute-0 sudo[74993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:44 compute-0 sudo[74993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:44 compute-0 sudo[74993]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:44 compute-0 sshd-session[75018]: Accepted publickey for ceph-admin from 192.168.122.100 port 39158 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:44 compute-0 systemd-logind[806]: New session 28 of user ceph-admin.
Oct 10 09:43:44 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 10 09:43:44 compute-0 sshd-session[75018]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:45 compute-0 sudo[75022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:45 compute-0 sudo[75022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:45 compute-0 sudo[75022]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:45 compute-0 ceph-mon[73551]: Deploying cephadm binary to compute-0
Oct 10 09:43:45 compute-0 sshd-session[75047]: Accepted publickey for ceph-admin from 192.168.122.100 port 39174 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:45 compute-0 systemd-logind[806]: New session 29 of user ceph-admin.
Oct 10 09:43:45 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 10 09:43:45 compute-0 sshd-session[75047]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:45 compute-0 sudo[75051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 10 09:43:45 compute-0 sudo[75051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:45 compute-0 sudo[75051]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:45 compute-0 sshd-session[75076]: Accepted publickey for ceph-admin from 192.168.122.100 port 39178 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:45 compute-0 systemd-logind[806]: New session 30 of user ceph-admin.
Oct 10 09:43:45 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 10 09:43:45 compute-0 sshd-session[75076]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:45 compute-0 sudo[75080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:45 compute-0 sudo[75080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:45 compute-0 sudo[75080]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:46 compute-0 sshd-session[75105]: Accepted publickey for ceph-admin from 192.168.122.100 port 39184 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:46 compute-0 systemd-logind[806]: New session 31 of user ceph-admin.
Oct 10 09:43:46 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 10 09:43:46 compute-0 sshd-session[75105]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:46 compute-0 sudo[75109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 10 09:43:46 compute-0 sudo[75109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:46 compute-0 sudo[75109]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:46 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:46 compute-0 sshd-session[75134]: Accepted publickey for ceph-admin from 192.168.122.100 port 39192 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:46 compute-0 systemd-logind[806]: New session 32 of user ceph-admin.
Oct 10 09:43:46 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 10 09:43:46 compute-0 sshd-session[75134]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:47 compute-0 sshd-session[75161]: Accepted publickey for ceph-admin from 192.168.122.100 port 39206 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:47 compute-0 systemd-logind[806]: New session 33 of user ceph-admin.
Oct 10 09:43:47 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 10 09:43:47 compute-0 sshd-session[75161]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:47 compute-0 sudo[75165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 10 09:43:47 compute-0 sudo[75165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:47 compute-0 sudo[75165]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:47 compute-0 sshd-session[75190]: Accepted publickey for ceph-admin from 192.168.122.100 port 39218 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:43:47 compute-0 systemd-logind[806]: New session 34 of user ceph-admin.
Oct 10 09:43:47 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct 10 09:43:47 compute-0 sshd-session[75190]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:43:48 compute-0 sudo[75194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Oct 10 09:43:48 compute-0 sudo[75194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:48 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:43:48 compute-0 sudo[75194]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:43:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:48 compute-0 ceph-mgr[73845]: [cephadm INFO root] Added host compute-0
Oct 10 09:43:48 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 10 09:43:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 10 09:43:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:48 compute-0 nervous_nobel[74856]: Added host 'compute-0' with addr '192.168.122.100'
Oct 10 09:43:48 compute-0 systemd[1]: libpod-709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33.scope: Deactivated successfully.
Oct 10 09:43:48 compute-0 podman[74840]: 2025-10-10 09:43:48.378285958 +0000 UTC m=+5.952240261 container died 709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33 (image=quay.io/ceph/ceph:v19, name=nervous_nobel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e1148ac73555bb46bbbb8c6929f434b201762a67c9abc46c014bbcde939bf1-merged.mount: Deactivated successfully.
Oct 10 09:43:48 compute-0 sudo[75241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:48 compute-0 podman[74840]: 2025-10-10 09:43:48.422448623 +0000 UTC m=+5.996402926 container remove 709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33 (image=quay.io/ceph/ceph:v19, name=nervous_nobel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:43:48 compute-0 sudo[75241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:48 compute-0 sudo[75241]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:48 compute-0 systemd[1]: libpod-conmon-709ee4a6c0c7047771b971234ea7bb065afe65875bc15a1319d60df774061f33.scope: Deactivated successfully.
Oct 10 09:43:48 compute-0 sudo[75281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Oct 10 09:43:48 compute-0 sudo[75281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:48 compute-0 podman[75279]: 2025-10-10 09:43:48.496958593 +0000 UTC m=+0.049255690 container create 649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4 (image=quay.io/ceph/ceph:v19, name=romantic_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:43:48 compute-0 systemd[1]: Started libpod-conmon-649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4.scope.
Oct 10 09:43:48 compute-0 podman[75279]: 2025-10-10 09:43:48.475896426 +0000 UTC m=+0.028193543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:48 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff0b5702a018481da5ff7ce62617f968f27648a774ff730e7579654db9f9bd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff0b5702a018481da5ff7ce62617f968f27648a774ff730e7579654db9f9bd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff0b5702a018481da5ff7ce62617f968f27648a774ff730e7579654db9f9bd6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:48 compute-0 podman[75279]: 2025-10-10 09:43:48.610555537 +0000 UTC m=+0.162852654 container init 649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4 (image=quay.io/ceph/ceph:v19, name=romantic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:43:48 compute-0 podman[75279]: 2025-10-10 09:43:48.620808087 +0000 UTC m=+0.173105184 container start 649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4 (image=quay.io/ceph/ceph:v19, name=romantic_saha, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Oct 10 09:43:48 compute-0 podman[75279]: 2025-10-10 09:43:48.62501426 +0000 UTC m=+0.177311387 container attach 649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4 (image=quay.io/ceph/ceph:v19, name=romantic_saha, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:48 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 10 09:43:48 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 10 09:43:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 10 09:43:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:48 compute-0 romantic_saha[75321]: Scheduled mon update...
Oct 10 09:43:49 compute-0 systemd[1]: libpod-649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4.scope: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75279]: 2025-10-10 09:43:49.016183998 +0000 UTC m=+0.568481095 container died 649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4 (image=quay.io/ceph/ceph:v19, name=romantic_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff0b5702a018481da5ff7ce62617f968f27648a774ff730e7579654db9f9bd6-merged.mount: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75279]: 2025-10-10 09:43:49.05584058 +0000 UTC m=+0.608137697 container remove 649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4 (image=quay.io/ceph/ceph:v19, name=romantic_saha, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:49 compute-0 systemd[1]: libpod-conmon-649502bff4238fa19bd1fa6ec7ea74407670e24247c419d7d4586c903a0cbcc4.scope: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.142176224 +0000 UTC m=+0.056913082 container create 1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4 (image=quay.io/ceph/ceph:v19, name=wonderful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:49 compute-0 podman[75339]: 2025-10-10 09:43:49.185567542 +0000 UTC m=+0.479593862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:49 compute-0 systemd[1]: Started libpod-conmon-1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4.scope.
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.115871146 +0000 UTC m=+0.030608014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffd8336dd7a305ac9f5dae619c7d54b8d2e3ecc480658bbfc17f889d09c7d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffd8336dd7a305ac9f5dae619c7d54b8d2e3ecc480658bbfc17f889d09c7d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffd8336dd7a305ac9f5dae619c7d54b8d2e3ecc480658bbfc17f889d09c7d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.238221238 +0000 UTC m=+0.152958126 container init 1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4 (image=quay.io/ceph/ceph:v19, name=wonderful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.247418321 +0000 UTC m=+0.162155149 container start 1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4 (image=quay.io/ceph/ceph:v19, name=wonderful_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.252459643 +0000 UTC m=+0.167196481 container attach 1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4 (image=quay.io/ceph/ceph:v19, name=wonderful_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.332611127 +0000 UTC m=+0.050778483 container create bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6 (image=quay.io/ceph/ceph:v19, name=gracious_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:49 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:49 compute-0 ceph-mon[73551]: Added host compute-0
Oct 10 09:43:49 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:43:49 compute-0 ceph-mon[73551]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:49 compute-0 ceph-mon[73551]: Saving service mon spec with placement count:5
Oct 10 09:43:49 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:49 compute-0 systemd[1]: Started libpod-conmon-bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6.scope.
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.308853446 +0000 UTC m=+0.027020792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.420591556 +0000 UTC m=+0.138758902 container init bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6 (image=quay.io/ceph/ceph:v19, name=gracious_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.425466933 +0000 UTC m=+0.143634269 container start bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6 (image=quay.io/ceph/ceph:v19, name=gracious_poincare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.42892689 +0000 UTC m=+0.147094236 container attach bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6 (image=quay.io/ceph/ceph:v19, name=gracious_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:49 compute-0 gracious_poincare[75456]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct 10 09:43:49 compute-0 systemd[1]: libpod-bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6.scope: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.54975443 +0000 UTC m=+0.267921756 container died bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6 (image=quay.io/ceph/ceph:v19, name=gracious_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b1c33f2928d9d1c4955f7321656375d5c4b7fdf160c0794fc85ffa92887f60c-merged.mount: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75421]: 2025-10-10 09:43:49.588565263 +0000 UTC m=+0.306732589 container remove bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6 (image=quay.io/ceph/ceph:v19, name=gracious_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:49 compute-0 sudo[75281]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct 10 09:43:49 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:49 compute-0 systemd[1]: libpod-conmon-bc244d5f7280087ca9a86600e4c1ba92b9be7e2137fa077c7f3e50e925e08db6.scope: Deactivated successfully.
Oct 10 09:43:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:49 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 10 09:43:49 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 10 09:43:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 10 09:43:49 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:49 compute-0 wonderful_dhawan[75402]: Scheduled mgr update...
Oct 10 09:43:49 compute-0 sudo[75473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:49 compute-0 sudo[75473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:49 compute-0 sudo[75473]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:49 compute-0 systemd[1]: libpod-1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4.scope: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.74214576 +0000 UTC m=+0.656882628 container died 1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4 (image=quay.io/ceph/ceph:v19, name=wonderful_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-daffd8336dd7a305ac9f5dae619c7d54b8d2e3ecc480658bbfc17f889d09c7d1-merged.mount: Deactivated successfully.
Oct 10 09:43:49 compute-0 podman[75386]: 2025-10-10 09:43:49.797454046 +0000 UTC m=+0.712190914 container remove 1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4 (image=quay.io/ceph/ceph:v19, name=wonderful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:43:49 compute-0 systemd[1]: libpod-conmon-1e1a9e38c7084e5133a2f75b2f1fab1d8ef05904adf43eb1fec1d9cd122043c4.scope: Deactivated successfully.
Oct 10 09:43:49 compute-0 sudo[75501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 09:43:49 compute-0 sudo[75501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:49 compute-0 podman[75534]: 2025-10-10 09:43:49.868911682 +0000 UTC m=+0.050652328 container create 52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2 (image=quay.io/ceph/ceph:v19, name=tender_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:43:49 compute-0 systemd[1]: Started libpod-conmon-52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2.scope.
Oct 10 09:43:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:49 compute-0 podman[75534]: 2025-10-10 09:43:49.845809684 +0000 UTC m=+0.027550310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf14d5bab01137fc5e19f6288a8866703e7920af1cd91e7e2085c537d85f208/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf14d5bab01137fc5e19f6288a8866703e7920af1cd91e7e2085c537d85f208/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf14d5bab01137fc5e19f6288a8866703e7920af1cd91e7e2085c537d85f208/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:49 compute-0 podman[75534]: 2025-10-10 09:43:49.959499101 +0000 UTC m=+0.141239767 container init 52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2 (image=quay.io/ceph/ceph:v19, name=tender_meitner, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:49 compute-0 podman[75534]: 2025-10-10 09:43:49.970112193 +0000 UTC m=+0.151852819 container start 52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2 (image=quay.io/ceph/ceph:v19, name=tender_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:49 compute-0 podman[75534]: 2025-10-10 09:43:49.974714799 +0000 UTC m=+0.156455435 container attach 52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2 (image=quay.io/ceph/ceph:v19, name=tender_meitner, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:50 compute-0 sudo[75501]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:50 compute-0 sudo[75596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:50 compute-0 sudo[75596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:50 compute-0 sudo[75596]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:50 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:50 compute-0 sudo[75621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:43:50 compute-0 sudo[75621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:50 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service crash spec with placement *
Oct 10 09:43:50 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 10 09:43:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:43:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:50 compute-0 tender_meitner[75551]: Scheduled crash update...
Oct 10 09:43:50 compute-0 systemd[1]: libpod-52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2.scope: Deactivated successfully.
Oct 10 09:43:50 compute-0 podman[75534]: 2025-10-10 09:43:50.411479402 +0000 UTC m=+0.593220058 container died 52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2 (image=quay.io/ceph/ceph:v19, name=tender_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf14d5bab01137fc5e19f6288a8866703e7920af1cd91e7e2085c537d85f208-merged.mount: Deactivated successfully.
Oct 10 09:43:50 compute-0 podman[75534]: 2025-10-10 09:43:50.454785989 +0000 UTC m=+0.636526615 container remove 52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2 (image=quay.io/ceph/ceph:v19, name=tender_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:50 compute-0 systemd[1]: libpod-conmon-52d58885536d64c1a00caa1b8433b4ef396d869fa77b254d4c143b92262513f2.scope: Deactivated successfully.
Oct 10 09:43:50 compute-0 podman[75661]: 2025-10-10 09:43:50.533683569 +0000 UTC m=+0.054208770 container create 577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9 (image=quay.io/ceph/ceph:v19, name=frosty_proskuriakova, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:50 compute-0 systemd[1]: Started libpod-conmon-577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9.scope.
Oct 10 09:43:50 compute-0 podman[75661]: 2025-10-10 09:43:50.510361703 +0000 UTC m=+0.030886944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eca6669d1017a6348d043e3703e9778d63e85134359c29d585bb1611c4e7fdd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eca6669d1017a6348d043e3703e9778d63e85134359c29d585bb1611c4e7fdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eca6669d1017a6348d043e3703e9778d63e85134359c29d585bb1611c4e7fdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:50 compute-0 podman[75661]: 2025-10-10 09:43:50.633458371 +0000 UTC m=+0.153983682 container init 577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9 (image=quay.io/ceph/ceph:v19, name=frosty_proskuriakova, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:50 compute-0 podman[75661]: 2025-10-10 09:43:50.643079789 +0000 UTC m=+0.163604980 container start 577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9 (image=quay.io/ceph/ceph:v19, name=frosty_proskuriakova, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:50 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:50 compute-0 ceph-mon[73551]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:50 compute-0 ceph-mon[73551]: Saving service mgr spec with placement count:2
Oct 10 09:43:50 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:50 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:50 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:50 compute-0 podman[75661]: 2025-10-10 09:43:50.65015971 +0000 UTC m=+0.170684971 container attach 577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9 (image=quay.io/ceph/ceph:v19, name=frosty_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 09:43:50 compute-0 podman[75771]: 2025-10-10 09:43:50.949409343 +0000 UTC m=+0.071434906 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:43:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct 10 09:43:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3046049042' entity='client.admin' 
Oct 10 09:43:51 compute-0 systemd[1]: libpod-577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9.scope: Deactivated successfully.
Oct 10 09:43:51 compute-0 podman[75661]: 2025-10-10 09:43:51.036869746 +0000 UTC m=+0.557394947 container died 577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9 (image=quay.io/ceph/ceph:v19, name=frosty_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 09:43:51 compute-0 podman[75771]: 2025-10-10 09:43:51.068180153 +0000 UTC m=+0.190205676 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eca6669d1017a6348d043e3703e9778d63e85134359c29d585bb1611c4e7fdd-merged.mount: Deactivated successfully.
Oct 10 09:43:51 compute-0 podman[75661]: 2025-10-10 09:43:51.104610805 +0000 UTC m=+0.625136016 container remove 577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9 (image=quay.io/ceph/ceph:v19, name=frosty_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:51 compute-0 systemd[1]: libpod-conmon-577aa1fe2553be804e7cad9d830ddd9a905b52a0f7a26e66d2a17ec3e2f19ef9.scope: Deactivated successfully.
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.182027805 +0000 UTC m=+0.048728862 container create 6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304 (image=quay.io/ceph/ceph:v19, name=angry_nobel, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 09:43:51 compute-0 systemd[1]: Started libpod-conmon-6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304.scope.
Oct 10 09:43:51 compute-0 sudo[75621]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.159681973 +0000 UTC m=+0.026383020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:51 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5292fe3bef0ab2f1784e03ddcdaa9c2cb034c6a03962c17b95f189d303049361/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5292fe3bef0ab2f1784e03ddcdaa9c2cb034c6a03962c17b95f189d303049361/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5292fe3bef0ab2f1784e03ddcdaa9c2cb034c6a03962c17b95f189d303049361/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.272990987 +0000 UTC m=+0.139692064 container init 6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304 (image=quay.io/ceph/ceph:v19, name=angry_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.279203208 +0000 UTC m=+0.145904225 container start 6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304 (image=quay.io/ceph/ceph:v19, name=angry_nobel, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.282606714 +0000 UTC m=+0.149307841 container attach 6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304 (image=quay.io/ceph/ceph:v19, name=angry_nobel, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:51 compute-0 sudo[75852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:51 compute-0 sudo[75852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:51 compute-0 sudo[75852]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:51 compute-0 sudo[75879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:43:51 compute-0 sudo[75879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct 10 09:43:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:51 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 75936 (sysctl)
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.631232081 +0000 UTC m=+0.497933128 container died 6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304 (image=quay.io/ceph/ceph:v19, name=angry_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:51 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 10 09:43:51 compute-0 systemd[1]: libpod-6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304.scope: Deactivated successfully.
Oct 10 09:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5292fe3bef0ab2f1784e03ddcdaa9c2cb034c6a03962c17b95f189d303049361-merged.mount: Deactivated successfully.
Oct 10 09:43:51 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 10 09:43:51 compute-0 podman[75817]: 2025-10-10 09:43:51.674500356 +0000 UTC m=+0.541201373 container remove 6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304 (image=quay.io/ceph/ceph:v19, name=angry_nobel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:51 compute-0 systemd[1]: libpod-conmon-6a62aaeab27ddd99172824735e0c7928a09c54c1691396b404db5bfae18b3304.scope: Deactivated successfully.
Oct 10 09:43:51 compute-0 podman[75951]: 2025-10-10 09:43:51.733383334 +0000 UTC m=+0.039336182 container create d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd (image=quay.io/ceph/ceph:v19, name=focused_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:43:51 compute-0 systemd[1]: Started libpod-conmon-d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd.scope.
Oct 10 09:43:51 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2b938c481885701795296dae2442ab9be887643b8c5882b2774f23fac07a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2b938c481885701795296dae2442ab9be887643b8c5882b2774f23fac07a42/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2b938c481885701795296dae2442ab9be887643b8c5882b2774f23fac07a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:51 compute-0 podman[75951]: 2025-10-10 09:43:51.716513889 +0000 UTC m=+0.022466757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:51 compute-0 podman[75951]: 2025-10-10 09:43:51.824596344 +0000 UTC m=+0.130549252 container init d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd (image=quay.io/ceph/ceph:v19, name=focused_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:43:51 compute-0 podman[75951]: 2025-10-10 09:43:51.833689464 +0000 UTC m=+0.139642312 container start d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd (image=quay.io/ceph/ceph:v19, name=focused_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:43:51 compute-0 podman[75951]: 2025-10-10 09:43:51.837054439 +0000 UTC m=+0.143007347 container attach d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd (image=quay.io/ceph/ceph:v19, name=focused_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:43:51 compute-0 sudo[75879]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:52 compute-0 ceph-mon[73551]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:52 compute-0 ceph-mon[73551]: Saving service crash spec with placement *
Oct 10 09:43:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3046049042' entity='client.admin' 
Oct 10 09:43:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:52 compute-0 sudo[76010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:52 compute-0 sudo[76010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:52 compute-0 sudo[76010]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:52 compute-0 sudo[76035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:43:52 compute-0 sudo[76035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:52 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:43:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:52 compute-0 ceph-mgr[73845]: [cephadm INFO root] Added label _admin to host compute-0
Oct 10 09:43:52 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 10 09:43:52 compute-0 focused_kalam[75972]: Added label _admin to host compute-0
Oct 10 09:43:52 compute-0 systemd[1]: libpod-d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd.scope: Deactivated successfully.
Oct 10 09:43:52 compute-0 podman[75951]: 2025-10-10 09:43:52.227804833 +0000 UTC m=+0.533757681 container died d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd (image=quay.io/ceph/ceph:v19, name=focused_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:52 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca2b938c481885701795296dae2442ab9be887643b8c5882b2774f23fac07a42-merged.mount: Deactivated successfully.
Oct 10 09:43:52 compute-0 podman[75951]: 2025-10-10 09:43:52.27643889 +0000 UTC m=+0.582391778 container remove d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd (image=quay.io/ceph/ceph:v19, name=focused_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:43:52 compute-0 systemd[1]: libpod-conmon-d89c7a1008bac4f043418bf5176c8aa3b64e83d8558098bd6406e1f0427e5cfd.scope: Deactivated successfully.
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.3471074 +0000 UTC m=+0.048971871 container create d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402 (image=quay.io/ceph/ceph:v19, name=nice_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:43:52 compute-0 systemd[1]: Started libpod-conmon-d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402.scope.
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.321039621 +0000 UTC m=+0.022904102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4034dd61a6a51b787ad8287225b2871c73fe3c8528b0c6e67ed09b0458e6eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4034dd61a6a51b787ad8287225b2871c73fe3c8528b0c6e67ed09b0458e6eb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4034dd61a6a51b787ad8287225b2871c73fe3c8528b0c6e67ed09b0458e6eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.436032662 +0000 UTC m=+0.137897153 container init d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402 (image=quay.io/ceph/ceph:v19, name=nice_davinci, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.446709836 +0000 UTC m=+0.148574317 container start d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402 (image=quay.io/ceph/ceph:v19, name=nice_davinci, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.450685552 +0000 UTC m=+0.152550023 container attach d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402 (image=quay.io/ceph/ceph:v19, name=nice_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:52 compute-0 sudo[76035]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:52 compute-0 sudo[76112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:52 compute-0 sudo[76112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:52 compute-0 sudo[76112]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:52 compute-0 sudo[76156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- inventory --format=json-pretty --filter-for-batch
Oct 10 09:43:52 compute-0 sudo[76156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct 10 09:43:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4139422049' entity='client.admin' 
Oct 10 09:43:52 compute-0 nice_davinci[76097]: set mgr/dashboard/cluster/status
Oct 10 09:43:52 compute-0 systemd[1]: libpod-d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402.scope: Deactivated successfully.
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.95099783 +0000 UTC m=+0.652862401 container died d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402 (image=quay.io/ceph/ceph:v19, name=nice_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 09:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc4034dd61a6a51b787ad8287225b2871c73fe3c8528b0c6e67ed09b0458e6eb-merged.mount: Deactivated successfully.
Oct 10 09:43:52 compute-0 podman[76073]: 2025-10-10 09:43:52.998423437 +0000 UTC m=+0.700287908 container remove d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402 (image=quay.io/ceph/ceph:v19, name=nice_davinci, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:43:53 compute-0 systemd[1]: libpod-conmon-d807640f830defa5656586226120c5e6ac4214cbfe3269337df4666ee0ccc402.scope: Deactivated successfully.
Oct 10 09:43:53 compute-0 ceph-mon[73551]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:53 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:53 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4139422049' entity='client.admin' 
Oct 10 09:43:53 compute-0 sudo[72508]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:53 compute-0 podman[76237]: 2025-10-10 09:43:53.099503284 +0000 UTC m=+0.043698541 container create 67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:43:53 compute-0 systemd[1]: Started libpod-conmon-67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624.scope.
Oct 10 09:43:53 compute-0 podman[76237]: 2025-10-10 09:43:53.084348748 +0000 UTC m=+0.028544005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:43:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:53 compute-0 podman[76237]: 2025-10-10 09:43:53.198094635 +0000 UTC m=+0.142289902 container init 67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kare, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:53 compute-0 podman[76237]: 2025-10-10 09:43:53.208169199 +0000 UTC m=+0.152364426 container start 67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:43:53 compute-0 hopeful_kare[76253]: 167 167
Oct 10 09:43:53 compute-0 systemd[1]: libpod-67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624.scope: Deactivated successfully.
Oct 10 09:43:53 compute-0 podman[76237]: 2025-10-10 09:43:53.211376068 +0000 UTC m=+0.155571305 container attach 67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kare, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:53 compute-0 podman[76258]: 2025-10-10 09:43:53.274447839 +0000 UTC m=+0.040179571 container died 67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb5fe6888c8b02befe1f0f7eaaff96369f8d5cecd95e27a7b5cbe9a501449a79-merged.mount: Deactivated successfully.
Oct 10 09:43:53 compute-0 podman[76258]: 2025-10-10 09:43:53.319171624 +0000 UTC m=+0.084903396 container remove 67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:43:53 compute-0 systemd[1]: libpod-conmon-67d5c92775e1f680b419893460888df538ee372619043a904cec587da4eab624.scope: Deactivated successfully.
Oct 10 09:43:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:43:53 compute-0 sudo[76298]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpzdegiwlpxqhyenyzqtglkzsifxsuku ; /usr/bin/python3'
Oct 10 09:43:53 compute-0 sudo[76298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:53 compute-0 python3[76300]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:43:53 compute-0 podman[76306]: 2025-10-10 09:43:53.556013469 +0000 UTC m=+0.060292856 container create ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 09:43:53 compute-0 systemd[1]: Started libpod-conmon-ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0.scope.
Oct 10 09:43:53 compute-0 podman[76319]: 2025-10-10 09:43:53.607764464 +0000 UTC m=+0.039720475 container create 95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421 (image=quay.io/ceph/ceph:v19, name=goofy_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:43:53 compute-0 podman[76306]: 2025-10-10 09:43:53.521183832 +0000 UTC m=+0.025463249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:43:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcaa79f67c8c3d79399a727a5656a778af2bceb8690aa3673bf6627005c7912d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcaa79f67c8c3d79399a727a5656a778af2bceb8690aa3673bf6627005c7912d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcaa79f67c8c3d79399a727a5656a778af2bceb8690aa3673bf6627005c7912d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcaa79f67c8c3d79399a727a5656a778af2bceb8690aa3673bf6627005c7912d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:53 compute-0 systemd[1]: Started libpod-conmon-95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421.scope.
Oct 10 09:43:53 compute-0 podman[76306]: 2025-10-10 09:43:53.660187911 +0000 UTC m=+0.164467278 container init ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e3c702204740d8a379f63a06b1e7bf2b91e26fb46c32f3051d0189af298f98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e3c702204740d8a379f63a06b1e7bf2b91e26fb46c32f3051d0189af298f98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:53 compute-0 podman[76306]: 2025-10-10 09:43:53.675364989 +0000 UTC m=+0.179644336 container start ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_dewdney, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:53 compute-0 podman[76306]: 2025-10-10 09:43:53.679437408 +0000 UTC m=+0.183716855 container attach ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:43:53 compute-0 podman[76319]: 2025-10-10 09:43:53.591763128 +0000 UTC m=+0.023719149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:53 compute-0 podman[76319]: 2025-10-10 09:43:53.68979908 +0000 UTC m=+0.121755091 container init 95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421 (image=quay.io/ceph/ceph:v19, name=goofy_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:43:53 compute-0 podman[76319]: 2025-10-10 09:43:53.697832565 +0000 UTC m=+0.129788576 container start 95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421 (image=quay.io/ceph/ceph:v19, name=goofy_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:43:53 compute-0 podman[76319]: 2025-10-10 09:43:53.701132547 +0000 UTC m=+0.133088588 container attach 95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421 (image=quay.io/ceph/ceph:v19, name=goofy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:43:54 compute-0 ceph-mon[73551]: Added label _admin to host compute-0
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4053444168' entity='client.admin' 
Oct 10 09:43:54 compute-0 systemd[1]: libpod-95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421.scope: Deactivated successfully.
Oct 10 09:43:54 compute-0 podman[76382]: 2025-10-10 09:43:54.176282869 +0000 UTC m=+0.055607338 container died 95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421 (image=quay.io/ceph/ceph:v19, name=goofy_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-27e3c702204740d8a379f63a06b1e7bf2b91e26fb46c32f3051d0189af298f98-merged.mount: Deactivated successfully.
Oct 10 09:43:54 compute-0 podman[76382]: 2025-10-10 09:43:54.232529356 +0000 UTC m=+0.111853775 container remove 95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421 (image=quay.io/ceph/ceph:v19, name=goofy_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:43:54 compute-0 systemd[1]: libpod-conmon-95876679ee6005cfbbbdea4f798adb8e33ae377834ddfe5ed0149c586a6c7421.scope: Deactivated successfully.
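Each of these short-lived CLI containers leaves the create/init/start/attach/died/remove trail journaled above. A local inspection sketch (not part of the deployment itself) for replaying that trail from podman's own event log:

    # Replay recent container lifecycle events from podman's event log.
    podman events --since 10m --stream=false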
Oct 10 09:43:54 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:54 compute-0 sudo[76298]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:54 compute-0 happy_dewdney[76334]: [
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:     {
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "available": false,
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "being_replaced": false,
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "ceph_device_lvm": false,
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "lsm_data": {},
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "lvs": [],
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "path": "/dev/sr0",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "rejected_reasons": [
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "Has a FileSystem",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "Insufficient space (<5GB)"
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         ],
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         "sys_api": {
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "actuators": null,
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "device_nodes": [
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:                 "sr0"
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             ],
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "devname": "sr0",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "human_readable_size": "482.00 KB",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "id_bus": "ata",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "model": "QEMU DVD-ROM",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "nr_requests": "2",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "parent": "/dev/sr0",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "partitions": {},
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "path": "/dev/sr0",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "removable": "1",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "rev": "2.5+",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "ro": "0",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "rotational": "0",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "sas_address": "",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "sas_device_handle": "",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "scheduler_mode": "mq-deadline",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "sectors": 0,
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "sectorsize": "2048",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "size": 493568.0,
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "support_discard": "2048",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "type": "disk",
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:             "vendor": "QEMU"
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:         }
Oct 10 09:43:54 compute-0 happy_dewdney[76334]:     }
Oct 10 09:43:54 compute-0 happy_dewdney[76334]: ]
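The JSON above is ceph-volume inventory output: the host's only device, /dev/sr0, is rejected for carrying a filesystem and being under 5 GB. A sketch for pulling the rejection reasons out of that document, assuming jq and a host-side ceph-volume are available (here cephadm runs it inside the container):

    # Show why ceph-volume rejected each unavailable device (jq assumed).
    ceph-volume inventory --format json \
        | jq -r '.[] | select(.available == false)
                 | "\(.path): \(.rejected_reasons | join(", "))"'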
Oct 10 09:43:54 compute-0 systemd[1]: libpod-ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0.scope: Deactivated successfully.
Oct 10 09:43:54 compute-0 conmon[76334]: conmon ff87f4e23adbaa3abf87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0.scope/container/memory.events
Oct 10 09:43:54 compute-0 podman[77406]: 2025-10-10 09:43:54.534693549 +0000 UTC m=+0.027724127 container died ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcaa79f67c8c3d79399a727a5656a778af2bceb8690aa3673bf6627005c7912d-merged.mount: Deactivated successfully.
Oct 10 09:43:54 compute-0 podman[77406]: 2025-10-10 09:43:54.572069923 +0000 UTC m=+0.065100481 container remove ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:43:54 compute-0 systemd[1]: libpod-conmon-ff87f4e23adbaa3abf8778378aa0605c1d90f48adbcc3dab89f314e72e48c2b0.scope: Deactivated successfully.
Oct 10 09:43:54 compute-0 sudo[76156]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:43:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:43:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:43:54 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:43:54 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:43:54 compute-0 sudo[77419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:43:54 compute-0 sudo[77419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:54 compute-0 sudo[77419]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:54 compute-0 sudo[77468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:43:54 compute-0 sudo[77468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:54 compute-0 sudo[77468]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:54 compute-0 sudo[77522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:43:54 compute-0 sudo[77522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:54 compute-0 sudo[77522]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:54 compute-0 sudo[77569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:54 compute-0 sudo[77569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:54 compute-0 sudo[77569]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:54 compute-0 sudo[77594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:43:54 compute-0 sudo[77594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:54 compute-0 sudo[77594]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4053444168' entity='client.admin' 
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:43:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:43:55 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:43:55 compute-0 sudo[77665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:43:55 compute-0 sudo[77665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77665]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 sudo[77713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:43:55 compute-0 sudo[77713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77713]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 sudo[77766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nicbhtjpwhbtdwwnchfhwakfkbicqtgb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760089434.7080166-33735-35653421391526/async_wrapper.py j949465154529 30 /home/zuul/.ansible/tmp/ansible-tmp-1760089434.7080166-33735-35653421391526/AnsiballZ_command.py _'
Oct 10 09:43:55 compute-0 sudo[77766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:55 compute-0 sudo[77764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:43:55 compute-0 sudo[77764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77764]: pam_unix(sudo:session): session closed for user root
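The sudo sequence above is cephadm's stage-and-rename pattern for distributing client files: build ceph.conf.new under /tmp/cephadm-<fsid>, set root ownership and mode, then mv it over /etc/ceph/ceph.conf as the final step so readers never observe a half-written config. Condensed into one sketch with the same paths as the log:

    # Condensed stage-and-rename sketch (paths from the log above).
    tmp=/tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
    sudo mkdir -p "$tmp" /etc/ceph
    sudo touch "$tmp/ceph.conf.new"             # stage the new file
    sudo chown 0:0 "$tmp/ceph.conf.new"         # root-owned, world-readable config
    sudo chmod 644 "$tmp/ceph.conf.new"
    sudo mv "$tmp/ceph.conf.new" /etc/ceph/ceph.conf   # swap in as the last step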
Oct 10 09:43:55 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:43:55 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:43:55 compute-0 sudo[77792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:43:55 compute-0 sudo[77792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77792]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 ansible-async_wrapper.py[77778]: Invoked with j949465154529 30 /home/zuul/.ansible/tmp/ansible-tmp-1760089434.7080166-33735-35653421391526/AnsiballZ_command.py _
Oct 10 09:43:55 compute-0 ansible-async_wrapper.py[77842]: Starting module and watcher
Oct 10 09:43:55 compute-0 ansible-async_wrapper.py[77842]: Start watching 77843 (30)
Oct 10 09:43:55 compute-0 ansible-async_wrapper.py[77843]: Start module (77843)
Oct 10 09:43:55 compute-0 ansible-async_wrapper.py[77778]: Return async_wrapper task started.
Oct 10 09:43:55 compute-0 sudo[77817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:43:55 compute-0 sudo[77817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77766]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 sudo[77817]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 sudo[77847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:43:55 compute-0 sudo[77847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77847]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 sudo[77872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:55 compute-0 sudo[77872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 python3[77844]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:43:55 compute-0 sudo[77872]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 podman[77897]: 2025-10-10 09:43:55.684667669 +0000 UTC m=+0.057127639 container create f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3 (image=quay.io/ceph/ceph:v19, name=nifty_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 09:43:55 compute-0 sudo[77898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:43:55 compute-0 sudo[77898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77898]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 systemd[1]: Started libpod-conmon-f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3.scope.
Oct 10 09:43:55 compute-0 podman[77897]: 2025-10-10 09:43:55.656378854 +0000 UTC m=+0.028838924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6a6875e31c5d44df5043d58a5c0639f30c54b1b1b42c54eb4e87ab83d1c66a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6a6875e31c5d44df5043d58a5c0639f30c54b1b1b42c54eb4e87ab83d1c66a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:55 compute-0 podman[77897]: 2025-10-10 09:43:55.813396898 +0000 UTC m=+0.185856888 container init f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3 (image=quay.io/ceph/ceph:v19, name=nifty_chatterjee, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:43:55 compute-0 podman[77897]: 2025-10-10 09:43:55.825077777 +0000 UTC m=+0.197537747 container start f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3 (image=quay.io/ceph/ceph:v19, name=nifty_chatterjee, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:43:55 compute-0 podman[77897]: 2025-10-10 09:43:55.828561555 +0000 UTC m=+0.201021525 container attach f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3 (image=quay.io/ceph/ceph:v19, name=nifty_chatterjee, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:55 compute-0 sudo[77963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:43:55 compute-0 sudo[77963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77963]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:55 compute-0 sudo[77989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:43:55 compute-0 sudo[77989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:55 compute-0 sudo[77989]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:43:56 compute-0 sudo[78033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78033]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:43:56 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:43:56 compute-0 sudo[78058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:43:56 compute-0 sudo[78058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78058]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:43:56 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:43:56 compute-0 sudo[78083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:43:56 compute-0 sudo[78083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78083]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:43:56 compute-0 nifty_chatterjee[77955]: 
Oct 10 09:43:56 compute-0 nifty_chatterjee[77955]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 09:43:56 compute-0 systemd[1]: libpod-f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3.scope: Deactivated successfully.
Oct 10 09:43:56 compute-0 podman[77897]: 2025-10-10 09:43:56.214906068 +0000 UTC m=+0.587366038 container died f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3 (image=quay.io/ceph/ceph:v19, name=nifty_chatterjee, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:43:56 compute-0 sudo[78108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:43:56 compute-0 sudo[78108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78108]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6a6875e31c5d44df5043d58a5c0639f30c54b1b1b42c54eb4e87ab83d1c66a-merged.mount: Deactivated successfully.
Oct 10 09:43:56 compute-0 ceph-mgr[73845]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 09:43:56 compute-0 podman[77897]: 2025-10-10 09:43:56.259807059 +0000 UTC m=+0.632267029 container remove f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3 (image=quay.io/ceph/ceph:v19, name=nifty_chatterjee, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:43:56 compute-0 systemd[1]: libpod-conmon-f1db7c26808eacdd2f717bec5faeff82cfa67423ba4e850bdd3c655dcbcd97d3.scope: Deactivated successfully.
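The nifty_chatterjee container's only payload was the orchestrator status document logged above. A sketch that gates further orchestration on it, assuming a working ceph CLI (or the containerized wrapper shown earlier) and jq:

    # Proceed only when the cephadm orchestrator reports itself available.
    status=$(ceph orch status --format json)
    if [ "$(printf '%s' "$status" | jq -r '.available')" = "true" ]; then
        echo "cephadm backend up with $(printf '%s' "$status" | jq -r '.workers') workers"
    fi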
Oct 10 09:43:56 compute-0 ansible-async_wrapper.py[77843]: Module complete (77843)
Oct 10 09:43:56 compute-0 sudo[78142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:56 compute-0 sudo[78142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78142]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:43:56 compute-0 sudo[78171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78171]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:43:56 compute-0 sudo[78219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78219]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:43:56 compute-0 sudo[78244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78244]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:43:56 compute-0 sudo[78292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78292]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:43:56 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:43:56 compute-0 sudo[78317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:43:56 compute-0 sudo[78317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78317]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:43:56 compute-0 sudo[78342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78342]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78407]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdreimxadebavuzkkyebhfxmcrpyjti ; /usr/bin/python3'
Oct 10 09:43:56 compute-0 sudo[78407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:56 compute-0 sudo[78375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:43:56 compute-0 sudo[78375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78375]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:56 compute-0 sudo[78418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78418]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 sudo[78443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:43:56 compute-0 sudo[78443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:56 compute-0 sudo[78443]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:56 compute-0 python3[78415]: ansible-ansible.legacy.async_status Invoked with jid=j949465154529.77778 mode=status _async_dir=/root/.ansible_async
Oct 10 09:43:56 compute-0 sudo[78407]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 sudo[78496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:43:57 compute-0 sudo[78496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:57 compute-0 sudo[78496]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 sudo[78575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdbenbspvuqovsvseoaugoyzyfsnzyyh ; /usr/bin/python3'
Oct 10 09:43:57 compute-0 sudo[78575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:57 compute-0 sudo[78551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:43:57 compute-0 ceph-mon[73551]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:43:57 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:43:57 compute-0 sudo[78551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:57 compute-0 sudo[78551]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 sudo[78590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:43:57 compute-0 sudo[78590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:57 compute-0 sudo[78590]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:43:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:43:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:57 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 3e48023b-bd49-4184-8e1b-c85ae66bc648 (Updating crash deployment (+1 -> 1))
Oct 10 09:43:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 10 09:43:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:43:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
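The mon_command pair above mints the per-host crash key; the 'profile crash' caps restrict it to filing crash reports. The same request in plain CLI form:

    # Create (or fetch) the key the crash.compute-0 daemon will use,
    # limited to the crash profile on both mon and mgr.
    ceph auth get-or-create client.crash.compute-0 \
        mon 'profile crash' mgr 'profile crash'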
Oct 10 09:43:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:43:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:43:57 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 10 09:43:57 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 10 09:43:57 compute-0 python3[78587]: ansible-ansible.legacy.async_status Invoked with jid=j949465154529.77778 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 09:43:57 compute-0 sudo[78575]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 sudo[78615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:57 compute-0 sudo[78615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:57 compute-0 sudo[78615]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 sudo[78640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:43:57 compute-0 sudo[78640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:57 compute-0 sudo[78712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ittybvnyzzuiyusoaejwiukrstgysssm ; /usr/bin/python3'
Oct 10 09:43:57 compute-0 sudo[78712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.763574192 +0000 UTC m=+0.054456498 container create 50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:43:57 compute-0 systemd[1]: Started libpod-conmon-50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d.scope.
Oct 10 09:43:57 compute-0 python3[78722]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.735584097 +0000 UTC m=+0.026466453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:43:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.853176947 +0000 UTC m=+0.144059293 container init 50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:57 compute-0 sudo[78712]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.86527532 +0000 UTC m=+0.156157626 container start 50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kalam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:43:57 compute-0 dreamy_kalam[78750]: 167 167
Oct 10 09:43:57 compute-0 systemd[1]: libpod-50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d.scope: Deactivated successfully.
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.869544265 +0000 UTC m=+0.160426571 container attach 50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kalam, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.869963309 +0000 UTC m=+0.160845605 container died 50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d692e9ca674171c28a74f2161ce520f01e456475d9d538b9950f057d93d6ac1-merged.mount: Deactivated successfully.
Oct 10 09:43:57 compute-0 podman[78733]: 2025-10-10 09:43:57.924558861 +0000 UTC m=+0.215441157 container remove 50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kalam, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 09:43:57 compute-0 systemd[1]: libpod-conmon-50b85b7f4419358e5d3d55200363d279b8ce622692189a3e7291d2d3876f139d.scope: Deactivated successfully.
Oct 10 09:43:57 compute-0 systemd[1]: Reloading.
Oct 10 09:43:58 compute-0 systemd-rc-local-generator[78797]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:58 compute-0 systemd-sysv-generator[78800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:43:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 09:43:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:43:58 compute-0 ceph-mon[73551]: Deploying daemon crash.compute-0 on compute-0
Oct 10 09:43:58 compute-0 ceph-mgr[73845]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 10 09:43:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:43:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 10 09:43:58 compute-0 sudo[78829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoppbixsqjjcubjmqgplubfefnujmxon ; /usr/bin/python3'
Oct 10 09:43:58 compute-0 sudo[78829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:43:58 compute-0 systemd[1]: Reloading.
Oct 10 09:43:58 compute-0 systemd-rc-local-generator[78862]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:43:58 compute-0 systemd-sysv-generator[78865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:43:58 compute-0 python3[78834]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:43:58 compute-0 podman[78870]: 2025-10-10 09:43:58.554131907 +0000 UTC m=+0.047393047 container create 0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e (image=quay.io/ceph/ceph:v19, name=magical_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:43:58 compute-0 systemd[1]: Started libpod-conmon-0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e.scope.
Oct 10 09:43:58 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:43:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:58 compute-0 podman[78870]: 2025-10-10 09:43:58.530066856 +0000 UTC m=+0.023328056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79df17cb373e8d7acd498d865082f0c6cffe4ce1011e2c6aa97509db8433320/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79df17cb373e8d7acd498d865082f0c6cffe4ce1011e2c6aa97509db8433320/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79df17cb373e8d7acd498d865082f0c6cffe4ce1011e2c6aa97509db8433320/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 podman[78870]: 2025-10-10 09:43:58.649179728 +0000 UTC m=+0.142440848 container init 0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e (image=quay.io/ceph/ceph:v19, name=magical_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 09:43:58 compute-0 podman[78870]: 2025-10-10 09:43:58.657102408 +0000 UTC m=+0.150363518 container start 0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e (image=quay.io/ceph/ceph:v19, name=magical_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:58 compute-0 podman[78870]: 2025-10-10 09:43:58.660407311 +0000 UTC m=+0.153668511 container attach 0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e (image=quay.io/ceph/ceph:v19, name=magical_mayer, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:43:58 compute-0 podman[78951]: 2025-10-10 09:43:58.860157252 +0000 UTC m=+0.054212630 container create b09e35c7466072fadfc98236d44201460ccc615f3ee028acf1ef38a620f50b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82727117fdf34f96b3d7c3857a46200aeab004f902ed143ca34ad3491cd52063/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82727117fdf34f96b3d7c3857a46200aeab004f902ed143ca34ad3491cd52063/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82727117fdf34f96b3d7c3857a46200aeab004f902ed143ca34ad3491cd52063/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82727117fdf34f96b3d7c3857a46200aeab004f902ed143ca34ad3491cd52063/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:58 compute-0 podman[78951]: 2025-10-10 09:43:58.919894788 +0000 UTC m=+0.113950226 container init b09e35c7466072fadfc98236d44201460ccc615f3ee028acf1ef38a620f50b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:43:58 compute-0 podman[78951]: 2025-10-10 09:43:58.841955541 +0000 UTC m=+0.036010949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:43:58 compute-0 podman[78951]: 2025-10-10 09:43:58.93518786 +0000 UTC m=+0.129243248 container start b09e35c7466072fadfc98236d44201460ccc615f3ee028acf1ef38a620f50b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:43:58 compute-0 bash[78951]: b09e35c7466072fadfc98236d44201460ccc615f3ee028acf1ef38a620f50b45
Oct 10 09:43:58 compute-0 systemd[1]: Started Ceph crash.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:43:59 compute-0 sudo[78640]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 10 09:43:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:43:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:43:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:43:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:43:59 compute-0 magical_mayer[78887]: 
Oct 10 09:43:59 compute-0 magical_mayer[78887]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
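
The two magical_mayer lines above are the stdout of the "orch status --format json" probe launched at 09:43:58. A minimal sketch of how a caller might gate on that JSON (the command and the response shape come from this log; the helper, its name, and running a bare `ceph` binary on PATH instead of the containerized one are assumptions):

    import json
    import subprocess

    def orch_available() -> bool:
        # Same query the playbook issues via podman; a plain `ceph` on PATH
        # is assumed here for brevity.
        out = subprocess.run(
            ["ceph", "orch", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        status = json.loads(out)
        # e.g. {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
        return status.get("available", False) and not status.get("paused", True)
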
Oct 10 09:43:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:43:59 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 3e48023b-bd49-4184-8e1b-c85ae66bc648 (Updating crash deployment (+1 -> 1))
Oct 10 09:43:59 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 3e48023b-bd49-4184-8e1b-c85ae66bc648 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 10 09:43:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 10 09:43:59 compute-0 systemd[1]: libpod-0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e.scope: Deactivated successfully.
Oct 10 09:43:59 compute-0 podman[78870]: 2025-10-10 09:43:59.061149575 +0000 UTC m=+0.554410765 container died 0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e (image=quay.io/ceph/ceph:v19, name=magical_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 09:43:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 10 09:43:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d79df17cb373e8d7acd498d865082f0c6cffe4ce1011e2c6aa97509db8433320-merged.mount: Deactivated successfully.
Oct 10 09:43:59 compute-0 podman[78870]: 2025-10-10 09:43:59.109003817 +0000 UTC m=+0.602264937 container remove 0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e (image=quay.io/ceph/ceph:v19, name=magical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:43:59 compute-0 systemd[1]: libpod-conmon-0ec1ba1d45ba2b0b97c74caa03aca9c8cb683b85a753eff4a8108ddf6327193e.scope: Deactivated successfully.
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: 2025-10-10T09:43:59.127+0000 7f86c19ea640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: 2025-10-10T09:43:59.127+0000 7f86c19ea640 -1 AuthRegistry(0x7f86bc069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: 2025-10-10T09:43:59.128+0000 7f86c19ea640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: 2025-10-10T09:43:59.128+0000 7f86c19ea640 -1 AuthRegistry(0x7f86c19e8ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: 2025-10-10T09:43:59.128+0000 7f86baffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: 2025-10-10T09:43:59.129+0000 7f86c19ea640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: [errno 13] RADOS permission denied (error connecting to the cluster)
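
The seven crash-compute-0 lines above are ceph-crash's startup ping failing: inside the container it searches only the default admin keyring paths, finds nothing, falls back to cephx-disabled (auth method [1], "none"), and the monitor, which only allows cephx (method [2]), rejects it with EACCES. The daemon's own key does exist; it was created by the "auth get-or-create client.crash.compute-0" at 09:43:58, and its keyring is bind-mounted per the xfs remount lines above. A hedged verification sketch (assumes it runs on the host, where /etc/ceph holds the admin keyring the playbook mounts):

    import subprocess

    # Confirm the dedicated crash key exists with the caps requested at 09:43:58:
    #   caps mon = "profile crash", caps mgr = "profile crash"
    proc = subprocess.run(
        ["ceph", "-k", "/etc/ceph/ceph.client.admin.keyring",
         "auth", "get", "client.crash.compute-0"],
        capture_output=True, text=True,
    )
    print(proc.stdout or proc.stderr)
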
Oct 10 09:43:59 compute-0 sudo[78829]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 10 09:43:59 compute-0 sudo[78989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:43:59 compute-0 sudo[78989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:59 compute-0 sudo[78989]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:59 compute-0 ceph-mon[73551]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:43:59 compute-0 ceph-mon[73551]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:43:59 compute-0 sudo[79028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:43:59 compute-0 sudo[79028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:59 compute-0 sudo[79028]: pam_unix(sudo:session): session closed for user root
Oct 10 09:43:59 compute-0 sudo[79053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:43:59 compute-0 sudo[79053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:43:59 compute-0 sudo[79101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kltybgxuslakccjdrkpgjslueertsppl ; /usr/bin/python3'
Oct 10 09:43:59 compute-0 sudo[79101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:43:59 compute-0 python3[79111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:43:59 compute-0 podman[79145]: 2025-10-10 09:43:59.732838607 +0000 UTC m=+0.041391532 container create ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d (image=quay.io/ceph/ceph:v19, name=festive_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:43:59 compute-0 systemd[1]: Started libpod-conmon-ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d.scope.
Oct 10 09:43:59 compute-0 podman[79145]: 2025-10-10 09:43:59.712263635 +0000 UTC m=+0.020816600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:43:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07923000d1ed4478ba9c3ef13ad417080cc2bb5a0c621538350ad92b9cc368b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07923000d1ed4478ba9c3ef13ad417080cc2bb5a0c621538350ad92b9cc368b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07923000d1ed4478ba9c3ef13ad417080cc2bb5a0c621538350ad92b9cc368b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:43:59 compute-0 podman[79145]: 2025-10-10 09:43:59.829794822 +0000 UTC m=+0.138347787 container init ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d (image=quay.io/ceph/ceph:v19, name=festive_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:43:59 compute-0 podman[79145]: 2025-10-10 09:43:59.837202816 +0000 UTC m=+0.145755751 container start ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d (image=quay.io/ceph/ceph:v19, name=festive_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:43:59 compute-0 podman[79145]: 2025-10-10 09:43:59.841973018 +0000 UTC m=+0.150526033 container attach ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d (image=quay.io/ceph/ceph:v19, name=festive_visvesvaraya, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:43:59 compute-0 podman[79197]: 2025-10-10 09:43:59.959701792 +0000 UTC m=+0.084217222 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:44:00 compute-0 podman[79197]: 2025-10-10 09:44:00.055179418 +0000 UTC m=+0.179694818 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:44:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/603968288' entity='client.admin' 
Oct 10 09:44:00 compute-0 systemd[1]: libpod-ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d.scope: Deactivated successfully.
Oct 10 09:44:00 compute-0 podman[79145]: 2025-10-10 09:44:00.301997993 +0000 UTC m=+0.610550918 container died ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d (image=quay.io/ceph/ceph:v19, name=festive_visvesvaraya, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07923000d1ed4478ba9c3ef13ad417080cc2bb5a0c621538350ad92b9cc368b-merged.mount: Deactivated successfully.
Oct 10 09:44:00 compute-0 podman[79145]: 2025-10-10 09:44:00.348224019 +0000 UTC m=+0.656776944 container remove ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d (image=quay.io/ceph/ceph:v19, name=festive_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:44:00 compute-0 systemd[1]: libpod-conmon-ef5cf4476c7ff8ef739e27a61ed8f349e49d07c15ea357ce2fe163cb9897f03d.scope: Deactivated successfully.
Oct 10 09:44:00 compute-0 sudo[79101]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:00 compute-0 sudo[79053]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 sudo[79299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:44:00 compute-0 sudo[79299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:00 compute-0 sudo[79299]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:00 compute-0 ansible-async_wrapper.py[77842]: Done in kid B.
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 10 09:44:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:44:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:44:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:44:00 compute-0 sudo[79355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbbktdnejbagmykgnkfvzsutmxgmhjoh ; /usr/bin/python3'
Oct 10 09:44:00 compute-0 sudo[79355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:44:00 compute-0 sudo[79337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:00 compute-0 sudo[79337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:00 compute-0 sudo[79337]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:00 compute-0 sudo[79375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:00 compute-0 sudo[79375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:00 compute-0 python3[79370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:44:00 compute-0 podman[79400]: 2025-10-10 09:44:00.712162158 +0000 UTC m=+0.034762817 container create 8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed (image=quay.io/ceph/ceph:v19, name=amazing_zhukovsky, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:00 compute-0 systemd[1]: Started libpod-conmon-8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed.scope.
Oct 10 09:44:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7faeebff7f58eea0837e94e18733411b78cc6e941345a928871568b9f7be3e35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7faeebff7f58eea0837e94e18733411b78cc6e941345a928871568b9f7be3e35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7faeebff7f58eea0837e94e18733411b78cc6e941345a928871568b9f7be3e35/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:00 compute-0 podman[79400]: 2025-10-10 09:44:00.696832945 +0000 UTC m=+0.019433634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:00 compute-0 podman[79400]: 2025-10-10 09:44:00.798387598 +0000 UTC m=+0.120988297 container init 8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed (image=quay.io/ceph/ceph:v19, name=amazing_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:44:00 compute-0 podman[79400]: 2025-10-10 09:44:00.803567184 +0000 UTC m=+0.126167843 container start 8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed (image=quay.io/ceph/ceph:v19, name=amazing_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:44:00 compute-0 podman[79400]: 2025-10-10 09:44:00.810374837 +0000 UTC m=+0.132975536 container attach 8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed (image=quay.io/ceph/ceph:v19, name=amazing_zhukovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 09:44:00 compute-0 podman[79436]: 2025-10-10 09:44:00.940477232 +0000 UTC m=+0.056820097 container create 9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04 (image=quay.io/ceph/ceph:v19, name=naughty_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 09:44:00 compute-0 systemd[1]: Started libpod-conmon-9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04.scope.
Oct 10 09:44:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:01 compute-0 podman[79436]: 2025-10-10 09:44:00.918968719 +0000 UTC m=+0.035311604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:01 compute-0 podman[79436]: 2025-10-10 09:44:01.025882395 +0000 UTC m=+0.142225280 container init 9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04 (image=quay.io/ceph/ceph:v19, name=naughty_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 09:44:01 compute-0 podman[79436]: 2025-10-10 09:44:01.032674207 +0000 UTC m=+0.149017082 container start 9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04 (image=quay.io/ceph/ceph:v19, name=naughty_burnell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:44:01 compute-0 podman[79436]: 2025-10-10 09:44:01.036939692 +0000 UTC m=+0.153282627 container attach 9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04 (image=quay.io/ceph/ceph:v19, name=naughty_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 09:44:01 compute-0 systemd[1]: libpod-9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04.scope: Deactivated successfully.
Oct 10 09:44:01 compute-0 naughty_burnell[79471]: 167 167
Oct 10 09:44:01 compute-0 conmon[79471]: conmon 9a4e5c99664ff0166385 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04.scope/container/memory.events
Oct 10 09:44:01 compute-0 podman[79436]: 2025-10-10 09:44:01.040175752 +0000 UTC m=+0.156518647 container died 9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04 (image=quay.io/ceph/ceph:v19, name=naughty_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 09:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-08ee2c7e1c62ff057d2acee43a5d1f6b10872d03406deca2adf4c56428a85202-merged.mount: Deactivated successfully.
Oct 10 09:44:01 compute-0 podman[79436]: 2025-10-10 09:44:01.091489752 +0000 UTC m=+0.207832597 container remove 9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04 (image=quay.io/ceph/ceph:v19, name=naughty_burnell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:01 compute-0 systemd[1]: libpod-conmon-9a4e5c99664ff0166385cc620183e78a906798083a698990f313eeae455d8f04.scope: Deactivated successfully.
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Oct 10 09:44:01 compute-0 sudo[79375]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/181932060' entity='client.admin' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.xkdepb (unknown last config time)...
Oct 10 09:44:01 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.xkdepb (unknown last config time)...
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:44:01 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:44:01 compute-0 systemd[1]: libpod-8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed.scope: Deactivated successfully.
Oct 10 09:44:01 compute-0 podman[79400]: 2025-10-10 09:44:01.173236669 +0000 UTC m=+0.495837408 container died 8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed (image=quay.io/ceph/ceph:v19, name=amazing_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 09:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7faeebff7f58eea0837e94e18733411b78cc6e941345a928871568b9f7be3e35-merged.mount: Deactivated successfully.
Oct 10 09:44:01 compute-0 podman[79400]: 2025-10-10 09:44:01.221419902 +0000 UTC m=+0.544020601 container remove 8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed (image=quay.io/ceph/ceph:v19, name=amazing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:01 compute-0 systemd[1]: libpod-conmon-8ea3de302ddec2271ca97e1a7918c557ab9ed16d9bf442971074956c5c06aeed.scope: Deactivated successfully.
Oct 10 09:44:01 compute-0 sudo[79355]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:01 compute-0 sudo[79491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:01 compute-0 sudo[79491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:01 compute-0 sudo[79491]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:01 compute-0 ceph-mon[73551]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/603968288' entity='client.admin' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/181932060' entity='client.admin' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:01 compute-0 sudo[79528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:01 compute-0 sudo[79528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:01 compute-0 sudo[79576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutlsjoazbbblgvbycbrnfcrnyijntob ; /usr/bin/python3'
Oct 10 09:44:01 compute-0 sudo[79576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:44:01 compute-0 python3[79578]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
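[note] The Ansible task above runs the ceph CLI in a throwaway quay.io/ceph/ceph:v19 container (--rm, host networking, /etc/ceph bind-mounted) and pins the cluster's required minimum client release to mimic; the mon confirms this below with "set require_min_compat_client to mimic". A minimal sketch of the same step outside the podman wrapper, assuming a usable admin keyring under /etc/ceph:

    ceph osd set-require-min-compat-client mimic
    # verify the pin; expect a "require_min_compat_client mimic" line
    ceph osd dump | grep require_min_compat_client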
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.671373293 +0000 UTC m=+0.064537581 container create 88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7 (image=quay.io/ceph/ceph:v19, name=elastic_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:01 compute-0 systemd[1]: Started libpod-conmon-88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7.scope.
Oct 10 09:44:01 compute-0 podman[79609]: 2025-10-10 09:44:01.718626036 +0000 UTC m=+0.044256961 container create a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243 (image=quay.io/ceph/ceph:v19, name=elegant_chaplygin, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.647303543 +0000 UTC m=+0.040467911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:01 compute-0 systemd[1]: Started libpod-conmon-a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243.scope.
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.742768948 +0000 UTC m=+0.135933246 container init 88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7 (image=quay.io/ceph/ceph:v19, name=elastic_babbage, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.750790222 +0000 UTC m=+0.143954520 container start 88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7 (image=quay.io/ceph/ceph:v19, name=elastic_babbage, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.754375074 +0000 UTC m=+0.147539412 container attach 88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7 (image=quay.io/ceph/ceph:v19, name=elastic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:01 compute-0 elastic_babbage[79624]: 167 167
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.7565907 +0000 UTC m=+0.149754988 container died 88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7 (image=quay.io/ceph/ceph:v19, name=elastic_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Oct 10 09:44:01 compute-0 systemd[1]: libpod-88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7.scope: Deactivated successfully.
Oct 10 09:44:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c85d4cf4cbb59ec9611037230ea46a1f36048ae7139e38dc4461931edf21ac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c85d4cf4cbb59ec9611037230ea46a1f36048ae7139e38dc4461931edf21ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c85d4cf4cbb59ec9611037230ea46a1f36048ae7139e38dc4461931edf21ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:01 compute-0 podman[79609]: 2025-10-10 09:44:01.775637919 +0000 UTC m=+0.101268864 container init a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243 (image=quay.io/ceph/ceph:v19, name=elegant_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:44:01 compute-0 podman[79609]: 2025-10-10 09:44:01.782454412 +0000 UTC m=+0.108085337 container start a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243 (image=quay.io/ceph/ceph:v19, name=elegant_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e682f3e120730631036d5f1f828d2ae6e437ae1cf52054aefd41bf2281f1db24-merged.mount: Deactivated successfully.
Oct 10 09:44:01 compute-0 podman[79609]: 2025-10-10 09:44:01.787437001 +0000 UTC m=+0.113067956 container attach a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243 (image=quay.io/ceph/ceph:v19, name=elegant_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:44:01 compute-0 podman[79609]: 2025-10-10 09:44:01.700893871 +0000 UTC m=+0.026524846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:01 compute-0 podman[79595]: 2025-10-10 09:44:01.804800944 +0000 UTC m=+0.197965232 container remove 88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7 (image=quay.io/ceph/ceph:v19, name=elastic_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:44:01 compute-0 systemd[1]: libpod-conmon-88de16b9ce9ade9129d84e078139aefb438e3968c150449065d6f1e2e750c6d7.scope: Deactivated successfully.
Oct 10 09:44:01 compute-0 sudo[79528]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:44:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:01 compute-0 sudo[79656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:44:01 compute-0 sudo[79656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:01 compute-0 sudo[79656]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Oct 10 09:44:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3386432352' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 10 09:44:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:02 compute-0 ceph-mon[73551]: Reconfiguring mgr.compute-0.xkdepb (unknown last config time)...
Oct 10 09:44:02 compute-0 ceph-mon[73551]: Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:44:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3386432352' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 10 09:44:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 10 09:44:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3386432352' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 10 09:44:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 10 09:44:02 compute-0 elegant_chaplygin[79631]: set require_min_compat_client to mimic
Oct 10 09:44:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 10 09:44:02 compute-0 systemd[1]: libpod-a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243.scope: Deactivated successfully.
Oct 10 09:44:02 compute-0 podman[79609]: 2025-10-10 09:44:02.922936067 +0000 UTC m=+1.248567022 container died a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243 (image=quay.io/ceph/ceph:v19, name=elegant_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c85d4cf4cbb59ec9611037230ea46a1f36048ae7139e38dc4461931edf21ac-merged.mount: Deactivated successfully.
Oct 10 09:44:02 compute-0 podman[79609]: 2025-10-10 09:44:02.966752002 +0000 UTC m=+1.292382927 container remove a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243 (image=quay.io/ceph/ceph:v19, name=elegant_chaplygin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:44:02 compute-0 systemd[1]: libpod-conmon-a841d060e0dde167cfc68f14a7d50baf53c1b4e12a90edf63ff16e64dc0e2243.scope: Deactivated successfully.
Oct 10 09:44:02 compute-0 sudo[79576]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:03 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 1 completed events
Oct 10 09:44:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:44:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:03 compute-0 sudo[79726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgthbjailwhpdtnueofzyarwtztbajan ; /usr/bin/python3'
Oct 10 09:44:03 compute-0 sudo[79726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:44:03 compute-0 python3[79728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:44:03 compute-0 podman[79729]: 2025-10-10 09:44:03.670391003 +0000 UTC m=+0.057875164 container create ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1 (image=quay.io/ceph/ceph:v19, name=admiring_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:03 compute-0 systemd[1]: Started libpod-conmon-ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1.scope.
Oct 10 09:44:03 compute-0 podman[79729]: 2025-10-10 09:44:03.644505361 +0000 UTC m=+0.031989612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:03 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c49a40596b81134965f19e8b2f581168d76a0459752c4274abd28e54093d51c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c49a40596b81134965f19e8b2f581168d76a0459752c4274abd28e54093d51c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c49a40596b81134965f19e8b2f581168d76a0459752c4274abd28e54093d51c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:03 compute-0 podman[79729]: 2025-10-10 09:44:03.759533083 +0000 UTC m=+0.147017294 container init ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1 (image=quay.io/ceph/ceph:v19, name=admiring_wright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 09:44:03 compute-0 podman[79729]: 2025-10-10 09:44:03.764535633 +0000 UTC m=+0.152019784 container start ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1 (image=quay.io/ceph/ceph:v19, name=admiring_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:03 compute-0 podman[79729]: 2025-10-10 09:44:03.767494424 +0000 UTC m=+0.154978595 container attach ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1 (image=quay.io/ceph/ceph:v19, name=admiring_wright, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:03 compute-0 ceph-mon[73551]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3386432352' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 10 09:44:03 compute-0 ceph-mon[73551]: osdmap e3: 0 total, 0 up, 0 in
Oct 10 09:44:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:04 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:44:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:04 compute-0 sudo[79768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:04 compute-0 sudo[79768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:04 compute-0 sudo[79768]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:04 compute-0 sudo[79793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Oct 10 09:44:04 compute-0 sudo[79793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:04 compute-0 sudo[79793]: pam_unix(sudo:session): session closed for user root
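[note] Before each deployment step the orchestrator re-runs the cephadm binary it copied to the host (named cephadm.<sha256-of-the-file>) as root; check-host verifies the expected hostname and basic host prerequisites (container engine, systemd, time sync). A sketch of the call exactly as it appears in the sudo record above:

    sudo /bin/python3 \
        /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
        --timeout 895 check-host --expect-hostname compute-0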
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:04 compute-0 ceph-mgr[73845]: [cephadm INFO root] Added host compute-0
Oct 10 09:44:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:44:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:04 compute-0 sudo[79838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:44:04 compute-0 sudo[79838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:04 compute-0 sudo[79838]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:44:05 compute-0 ceph-mon[73551]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:05 compute-0 ceph-mon[73551]: Added host compute-0
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:06 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct 10 09:44:06 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct 10 09:44:07 compute-0 ceph-mon[73551]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:07 compute-0 ceph-mon[73551]: Deploying cephadm binary to compute-1
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:44:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:44:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:09 compute-0 ceph-mon[73551]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:44:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:10 compute-0 ceph-mgr[73845]: [cephadm INFO root] Added host compute-1
Oct 10 09:44:10 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Added host compute-1
Oct 10 09:44:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:11 compute-0 ceph-mon[73551]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:11 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:11 compute-0 ceph-mon[73551]: Added host compute-1
Oct 10 09:44:11 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:12 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct 10 09:44:12 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct 10 09:44:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:12 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:13 compute-0 ceph-mon[73551]: Deploying cephadm binary to compute-2
Oct 10 09:44:13 compute-0 ceph-mon[73551]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:13 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:15 compute-0 ceph-mon[73551]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 10 09:44:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Added host compute-2
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Added host compute-2
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 10 09:44:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 10 09:44:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct 10 09:44:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:16 compute-0 admiring_wright[79744]: Added host 'compute-0' with addr '192.168.122.100'
Oct 10 09:44:16 compute-0 admiring_wright[79744]: Added host 'compute-1' with addr '192.168.122.101'
Oct 10 09:44:16 compute-0 admiring_wright[79744]: Added host 'compute-2' with addr '192.168.122.102'
Oct 10 09:44:16 compute-0 admiring_wright[79744]: Scheduled mon update...
Oct 10 09:44:16 compute-0 admiring_wright[79744]: Scheduled mgr update...
Oct 10 09:44:16 compute-0 admiring_wright[79744]: Scheduled osd.default_drive_group update...
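[note] The "Saving service ... spec" and "Scheduled ... update" lines above come from ceph orch apply reading the spec bind-mounted at /home/ceph_spec.yaml: it evidently declares mon, mgr, and osd.default_drive_group services, each placed on compute-0, compute-1, and compute-2. A hypothetical reconstruction of that spec; the OSD drive selection is not visible in the log, so data_devices here is an assumption:

    cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
    service_type: mon
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: mgr
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    spec:
      data_devices:
        all: true    # assumption: actual device filter not shown in the log
    EOF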
Oct 10 09:44:16 compute-0 systemd[1]: libpod-ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1.scope: Deactivated successfully.
Oct 10 09:44:16 compute-0 podman[79729]: 2025-10-10 09:44:16.31469453 +0000 UTC m=+12.702178691 container died ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1 (image=quay.io/ceph/ceph:v19, name=admiring_wright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c49a40596b81134965f19e8b2f581168d76a0459752c4274abd28e54093d51c-merged.mount: Deactivated successfully.
Oct 10 09:44:16 compute-0 podman[79729]: 2025-10-10 09:44:16.360985847 +0000 UTC m=+12.748469998 container remove ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1 (image=quay.io/ceph/ceph:v19, name=admiring_wright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:16 compute-0 systemd[1]: libpod-conmon-ced51fc19092778a47df6422f718c927a76f08378120987dedfb39242950c3f1.scope: Deactivated successfully.
Oct 10 09:44:16 compute-0 sudo[79726]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:16 compute-0 sudo[79899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhbmzhelmlcynqmlbblmnokjlsqmbkd ; /usr/bin/python3'
Oct 10 09:44:16 compute-0 sudo[79899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:44:16 compute-0 python3[79901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:44:16 compute-0 podman[79903]: 2025-10-10 09:44:16.877069773 +0000 UTC m=+0.062370168 container create 84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8 (image=quay.io/ceph/ceph:v19, name=interesting_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:16 compute-0 systemd[1]: Started libpod-conmon-84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8.scope.
Oct 10 09:44:16 compute-0 podman[79903]: 2025-10-10 09:44:16.849846095 +0000 UTC m=+0.035146490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f004acba5163e56530e291587fc12c0bc9cb031095abe4582964ec0e1d3a0ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f004acba5163e56530e291587fc12c0bc9cb031095abe4582964ec0e1d3a0ad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f004acba5163e56530e291587fc12c0bc9cb031095abe4582964ec0e1d3a0ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:16 compute-0 podman[79903]: 2025-10-10 09:44:16.972921382 +0000 UTC m=+0.158221837 container init 84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8 (image=quay.io/ceph/ceph:v19, name=interesting_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:16 compute-0 podman[79903]: 2025-10-10 09:44:16.984403973 +0000 UTC m=+0.169704338 container start 84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8 (image=quay.io/ceph/ceph:v19, name=interesting_torvalds, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:44:16 compute-0 podman[79903]: 2025-10-10 09:44:16.98842713 +0000 UTC m=+0.173727525 container attach 84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8 (image=quay.io/ceph/ceph:v19, name=interesting_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:44:17 compute-0 ceph-mon[73551]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:17 compute-0 ceph-mon[73551]: Added host compute-2
Oct 10 09:44:17 compute-0 ceph-mon[73551]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:17 compute-0 ceph-mon[73551]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:17 compute-0 ceph-mon[73551]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 10 09:44:17 compute-0 ceph-mon[73551]: Marking host: compute-1 for OSDSpec preview refresh.
Oct 10 09:44:17 compute-0 ceph-mon[73551]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 10 09:44:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 10 09:44:17 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793260473' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:44:17 compute-0 interesting_torvalds[79920]: 
Oct 10 09:44:17 compute-0 interesting_torvalds[79920]: {"fsid":"21f084a3-af34-5230-afe4-ea5cd24a55f4","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":59,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-10T09:43:15:731413+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-10T09:43:15.734386+0000","services":{}},"progress_events":{}}
Oct 10 09:44:17 compute-0 podman[79903]: 2025-10-10 09:44:17.447642057 +0000 UTC m=+0.632942422 container died 84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8 (image=quay.io/ceph/ceph:v19, name=interesting_torvalds, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:17 compute-0 systemd[1]: libpod-84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8.scope: Deactivated successfully.
Oct 10 09:44:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f004acba5163e56530e291587fc12c0bc9cb031095abe4582964ec0e1d3a0ad-merged.mount: Deactivated successfully.
Oct 10 09:44:17 compute-0 podman[79903]: 2025-10-10 09:44:17.48553097 +0000 UTC m=+0.670831375 container remove 84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8 (image=quay.io/ceph/ceph:v19, name=interesting_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:44:17 compute-0 systemd[1]: libpod-conmon-84d04522bec11022c734d5489a25548acf8a0929dfdcd4768775541f6c56bad8.scope: Deactivated successfully.
Oct 10 09:44:17 compute-0 sudo[79899]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3793260473' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:44:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:19 compute-0 ceph-mon[73551]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:21 compute-0 ceph-mon[73551]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:23 compute-0 ceph-mon[73551]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:25 compute-0 ceph-mon[73551]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:27 compute-0 ceph-mon[73551]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:29 compute-0 ceph-mon[73551]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:31 compute-0 ceph-mon[73551]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:44:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:44:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:44:33 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:44:33 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:44:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:33 compute-0 ceph-mon[73551]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:44:33 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:44:33 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:44:33 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:44:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:34 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:44:34 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:44:34 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:44:34 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:44:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:44:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:44:35.344+0000 7ff1b6f19640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: service_name: mon
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: placement:
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   hosts:
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   - compute-0
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   - compute-1
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   - compute-2
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:44:35.347+0000 7ff1b6f19640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: service_name: mgr
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: placement:
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   hosts:
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   - compute-0
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   - compute-1
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   - compute-2
Oct 10 09:44:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
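Both spec failures have the same root cause: cephadm validates placement hosts against its own inventory, and compute-2 has not yet been registered, hence "Unknown hosts". A sketch of the usual remedy, run from a node with admin access (the address below is illustrative, not taken from this log):
                                               ssh-copy-id -f -i /etc/ceph/ceph.pub root@compute-2   # install the cluster SSH key
                                               ceph orch host add compute-2 192.168.122.102          # IP is hypothetical
                                               ceph orch host ls                                     # confirm the inventory now lists compute-2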
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev de07891c-8491-4950-bd1d-69f8a6e33538 (Updating crash deployment (+1 -> 2))
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 10 09:44:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:44:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 09:44:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct 10 09:44:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
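Before deploying crash.compute-1, the mgr mints a per-host credential via the auth get-or-create call logged above, scoped to the crash profile on mon and mgr, which is just enough privilege to post crash dumps. The generated keyring can be inspected afterwards (sketch):
                                               ceph auth get client.crash.compute-1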
Oct 10 09:44:35 compute-0 ceph-mon[73551]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:35 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:44:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:44:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 09:44:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:36 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 10 09:44:36 compute-0 ceph-mon[73551]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 10 09:44:36 compute-0 ceph-mon[73551]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:36 compute-0 ceph-mon[73551]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 10 09:44:36 compute-0 ceph-mon[73551]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:36 compute-0 ceph-mon[73551]: Deploying daemon crash.compute-1 on compute-1
Oct 10 09:44:36 compute-0 ceph-mon[73551]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
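The CEPHADM_APPLY_SPEC_FAIL health check mirrors the two spec failures above and stays raised until a later serve-loop pass applies both specs cleanly, e.g. once compute-2 has been registered. To see the failing specs and their per-spec error text (sketch):
                                               ceph health detail   # shows CEPHADM_APPLY_SPEC_FAIL with the mon/mgr messages
                                               ceph orch ls mon     # spec status and placement counts per service
                                               ceph orch ls mgr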
Oct 10 09:44:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev de07891c-8491-4950-bd1d-69f8a6e33538 (Updating crash deployment (+1 -> 2))
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event de07891c-8491-4950-bd1d-69f8a6e33538 (Updating crash deployment (+1 -> 2)) in 3 seconds
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:44:38
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [balancer INFO root] No pools available
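The balancer module wakes on its own interval, starts an optimization plan in upmap mode, and bails out here because the cluster has no pools yet; "No pools available" is idle chatter rather than an error. Its state can be checked with (sketch):
                                               ceph balancer status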
Oct 10 09:44:38 compute-0 sudo[79957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:44:38 compute-0 sudo[79957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:38 compute-0 sudo[79957]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 2 completed events
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:44:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:44:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:44:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:38 compute-0 sudo[79982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:44:38 compute-0 sudo[79982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
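This sudo line is the heart of OSD creation: the mgr, connected over SSH as ceph-admin, runs the copied cephadm binary (cephadm.<sha256>), which launches a one-shot ceph-volume container. CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group tags the new OSD with the drive-group spec that requested it, and "lvm batch --no-auto ... --no-systemd" consumes the pre-built LV without creating host systemd units, since cephadm manages those itself. A roughly equivalent manual invocation on the host, reusing the arguments from this log (sketch, assuming the cluster config and bootstrap-osd keyring are in place):
                                               cephadm ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- \
                                                   lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd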
Oct 10 09:44:38 compute-0 ceph-mon[73551]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.799966635 +0000 UTC m=+0.066854010 container create bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:38 compute-0 systemd[1]: Started libpod-conmon-bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157.scope.
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.770545162 +0000 UTC m=+0.037432617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.890380398 +0000 UTC m=+0.157267803 container init bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_archimedes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.902054316 +0000 UTC m=+0.168941681 container start bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.905788673 +0000 UTC m=+0.172676128 container attach bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:44:38 compute-0 pensive_archimedes[80062]: 167 167
Oct 10 09:44:38 compute-0 systemd[1]: libpod-bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157.scope: Deactivated successfully.
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.911842809 +0000 UTC m=+0.178730174 container died bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe8c1ff95abb184787c2e6b9167f95fa120cb2c03c8e88917ee217982ed9af9e-merged.mount: Deactivated successfully.
Oct 10 09:44:38 compute-0 podman[80046]: 2025-10-10 09:44:38.96317222 +0000 UTC m=+0.230059585 container remove bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:38 compute-0 systemd[1]: libpod-conmon-bb26e9c44b6ed94beb9ccb6f1d9b9980e2d2f973ce9bf626f241543814ce0157.scope: Deactivated successfully.
Oct 10 09:44:39 compute-0 podman[80084]: 2025-10-10 09:44:39.115790593 +0000 UTC m=+0.043285446 container create 331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 09:44:39 compute-0 systemd[1]: Started libpod-conmon-331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be.scope.
Oct 10 09:44:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188597f42f206ba3c9a83f11bbefc68954caa4ceb07ac4a3e7d6933ff1dfbe4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188597f42f206ba3c9a83f11bbefc68954caa4ceb07ac4a3e7d6933ff1dfbe4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188597f42f206ba3c9a83f11bbefc68954caa4ceb07ac4a3e7d6933ff1dfbe4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188597f42f206ba3c9a83f11bbefc68954caa4ceb07ac4a3e7d6933ff1dfbe4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/188597f42f206ba3c9a83f11bbefc68954caa4ceb07ac4a3e7d6933ff1dfbe4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-0 podman[80084]: 2025-10-10 09:44:39.095581495 +0000 UTC m=+0.023076368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:39 compute-0 podman[80084]: 2025-10-10 09:44:39.19809849 +0000 UTC m=+0.125593363 container init 331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:44:39 compute-0 podman[80084]: 2025-10-10 09:44:39.205615146 +0000 UTC m=+0.133109999 container start 331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:44:39 compute-0 podman[80084]: 2025-10-10 09:44:39.210150382 +0000 UTC m=+0.137645245 container attach 331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shamir, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 09:44:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:39 compute-0 boring_shamir[80101]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:44:39 compute-0 boring_shamir[80101]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:39 compute-0 boring_shamir[80101]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:39 compute-0 boring_shamir[80101]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c307f4a4-39e7-4a9c-9d19-a2b8712089ab
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"} v 0)
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]': finished
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
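The "failed to return metadata for osd.0" message is a benign race: "osd new" registers the id/uuid in the osdmap (hence e4: 1 total, 0 up, 1 in) before any daemon exists, and OSDs only report metadata once they boot, so the mgr's immediate probe comes back ENOENT. The same query succeeds once osd.0 starts (sketch):
                                               ceph osd metadata 0   # ENOENT until osd.0 boots, then full host/device info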
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 10 09:44:40 compute-0 lvm[80163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:40 compute-0 lvm[80163]: VG ceph_vg0 finished
Oct 10 09:44:40 compute-0 ceph-mon[73551]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:40 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]': finished
Oct 10 09:44:40 compute-0 ceph-mon[73551]: osdmap e4: 1 total, 0 up, 1 in
Oct 10 09:44:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"} v 0)
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]': finished
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:40 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:40 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 10 09:44:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176337060' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:44:40 compute-0 boring_shamir[80101]:  stderr: got monmap epoch 1
Oct 10 09:44:40 compute-0 boring_shamir[80101]: --> Creating keyring file for osd.0
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 10 09:44:40 compute-0 boring_shamir[80101]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c307f4a4-39e7-4a9c-9d19-a2b8712089ab --setuser ceph --setgroup ceph
Oct 10 09:44:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 10 09:44:41 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1441666751' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:44:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]: dispatch
Oct 10 09:44:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]': finished
Oct 10 09:44:41 compute-0 ceph-mon[73551]: osdmap e5: 2 total, 0 up, 2 in
Oct 10 09:44:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2176337060' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:44:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1441666751' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:44:41 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 10 09:44:42 compute-0 ceph-mon[73551]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:42 compute-0 ceph-mon[73551]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 10 09:44:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:43 compute-0 boring_shamir[80101]:  stderr: 2025-10-10T09:44:40.851+0000 7f459521d740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Oct 10 09:44:43 compute-0 boring_shamir[80101]:  stderr: 2025-10-10T09:44:41.113+0000 7f459521d740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
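Both stderr lines are expected on a brand-new LV: --mkfs first probes the device for an existing bluestore label and fsid, finds neither ("No valid bdev label found", "_read_fsid unparsable uuid"), and then writes fresh ones, which is why ceph-volume still reports prepare as successful on the next line. The freshly written label can be verified afterwards (sketch):
                                               ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0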
Oct 10 09:44:43 compute-0 boring_shamir[80101]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 10 09:44:43 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 09:44:43 compute-0 boring_shamir[80101]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 10 09:44:43 compute-0 boring_shamir[80101]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:43 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:43 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:43 compute-0 boring_shamir[80101]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 09:44:43 compute-0 boring_shamir[80101]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 10 09:44:43 compute-0 boring_shamir[80101]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
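"lvm create" is simply "lvm prepare" followed by "lvm activate", which the two preceding success lines confirm; because --no-systemd was passed, activation populated /var/lib/ceph/osd/ceph-0 (tmpfs mount, block symlink, keyring) without enabling host units. The resulting LV-to-OSD binding can be confirmed with (sketch, run via cephadm or inside the container):
                                               ceph-volume lvm list ceph_vg0/ceph_lv0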
Oct 10 09:44:43 compute-0 systemd[1]: libpod-331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be.scope: Deactivated successfully.
Oct 10 09:44:43 compute-0 podman[80084]: 2025-10-10 09:44:43.845253708 +0000 UTC m=+4.772748591 container died 331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shamir, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 09:44:43 compute-0 systemd[1]: libpod-331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be.scope: Consumed 2.266s CPU time.
Oct 10 09:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-188597f42f206ba3c9a83f11bbefc68954caa4ceb07ac4a3e7d6933ff1dfbe4c-merged.mount: Deactivated successfully.
Oct 10 09:44:43 compute-0 podman[80084]: 2025-10-10 09:44:43.901144803 +0000 UTC m=+4.828639656 container remove 331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shamir, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:43 compute-0 systemd[1]: libpod-conmon-331945509e92ce00c766dc8aa639eb084c50fed2f75ac03aebbd2c252ee5c7be.scope: Deactivated successfully.
Oct 10 09:44:43 compute-0 sudo[79982]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:44 compute-0 sudo[81100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:44 compute-0 sudo[81100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:44 compute-0 sudo[81100]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:44 compute-0 sudo[81125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:44:44 compute-0 sudo[81125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
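With the OSD prepared, cephadm immediately re-runs ceph-volume in list mode to refresh its inventory; the JSON that begins printing below, keyed by OSD id ("0": [...]), is what the mgr parses to map LVs back to OSD ids. The equivalent manual check, mirroring the command in the sudo line above (sketch):
                                               cephadm ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json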
Oct 10 09:44:44 compute-0 ceph-mon[73551]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.641512465 +0000 UTC m=+0.066494555 container create 22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kowalevski, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:44:44 compute-0 systemd[1]: Started libpod-conmon-22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b.scope.
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.614979819 +0000 UTC m=+0.039961949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.736886652 +0000 UTC m=+0.161868742 container init 22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.75047349 +0000 UTC m=+0.175455570 container start 22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kowalevski, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.754600709 +0000 UTC m=+0.179582839 container attach 22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:44 compute-0 wizardly_kowalevski[81204]: 167 167
Oct 10 09:44:44 compute-0 systemd[1]: libpod-22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b.scope: Deactivated successfully.
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.759664399 +0000 UTC m=+0.184646489 container died 22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f0713709945a9e6c6495db4da6d68cfecf8442880465aaa4f6afddc67cd6bf-merged.mount: Deactivated successfully.
Oct 10 09:44:44 compute-0 podman[81187]: 2025-10-10 09:44:44.819693034 +0000 UTC m=+0.244675144 container remove 22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kowalevski, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:44 compute-0 systemd[1]: libpod-conmon-22815e7e7de94f484288617e7735b75023f25800c8fb418e5cb480213f02338b.scope: Deactivated successfully.
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:45.015470267 +0000 UTC m=+0.051642773 container create 07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:45 compute-0 systemd[1]: Started libpod-conmon-07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0.scope.
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:44.99242361 +0000 UTC m=+0.028596096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:45 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1d586372775b45f31d3183bde2b8b8b4ef711bfcb5e42030784425cf044fdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1d586372775b45f31d3183bde2b8b8b4ef711bfcb5e42030784425cf044fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1d586372775b45f31d3183bde2b8b8b4ef711bfcb5e42030784425cf044fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1d586372775b45f31d3183bde2b8b8b4ef711bfcb5e42030784425cf044fdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:45.118270384 +0000 UTC m=+0.154442870 container init 07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:45.130230378 +0000 UTC m=+0.166402874 container start 07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:45.134913966 +0000 UTC m=+0.171086452 container attach 07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:44:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:45 compute-0 exciting_benz[81246]: {
Oct 10 09:44:45 compute-0 exciting_benz[81246]:     "0": [
Oct 10 09:44:45 compute-0 exciting_benz[81246]:         {
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "devices": [
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "/dev/loop3"
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             ],
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "lv_name": "ceph_lv0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "lv_size": "21470642176",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "name": "ceph_lv0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "tags": {
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.cluster_name": "ceph",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.crush_device_class": "",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.encrypted": "0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.osd_id": "0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.type": "block",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.vdo": "0",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:                 "ceph.with_tpm": "0"
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             },
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "type": "block",
Oct 10 09:44:45 compute-0 exciting_benz[81246]:             "vg_name": "ceph_vg0"
Oct 10 09:44:45 compute-0 exciting_benz[81246]:         }
Oct 10 09:44:45 compute-0 exciting_benz[81246]:     ]
Oct 10 09:44:45 compute-0 exciting_benz[81246]: }
Oct 10 09:44:45 compute-0 systemd[1]: libpod-07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0.scope: Deactivated successfully.
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:45.459729381 +0000 UTC m=+0.495901867 container died 07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c1d586372775b45f31d3183bde2b8b8b4ef711bfcb5e42030784425cf044fdb-merged.mount: Deactivated successfully.
Oct 10 09:44:45 compute-0 podman[81229]: 2025-10-10 09:44:45.524371922 +0000 UTC m=+0.560544418 container remove 07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_benz, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:45 compute-0 systemd[1]: libpod-conmon-07590f4e923dee1f74e02f4ba40c260bb795d7fb262df53f88ae2b6fd7e6a9f0.scope: Deactivated successfully.
Oct 10 09:44:45 compute-0 sudo[81125]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 10 09:44:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 09:44:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:45 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 10 09:44:45 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 10 09:44:45 compute-0 sudo[81270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:45 compute-0 sudo[81270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:45 compute-0 sudo[81270]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:45 compute-0 sudo[81295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:45 compute-0 sudo[81295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.169927016 +0000 UTC m=+0.044893285 container create fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 09:44:46 compute-0 systemd[1]: Started libpod-conmon-fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc.scope.
Oct 10 09:44:46 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.148765212 +0000 UTC m=+0.023731461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.246394435 +0000 UTC m=+0.121360684 container init fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.254536199 +0000 UTC m=+0.129502428 container start fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.257926244 +0000 UTC m=+0.132892493 container attach fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 09:44:46 compute-0 goofy_shaw[81378]: 167 167
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.262584202 +0000 UTC m=+0.137550431 container died fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 09:44:46 compute-0 systemd[1]: libpod-fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc.scope: Deactivated successfully.
Oct 10 09:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c92a8aec6db7ec41da5ca95fc54d1fa946d2f634c77da922d044d432656dee81-merged.mount: Deactivated successfully.
Oct 10 09:44:46 compute-0 podman[81362]: 2025-10-10 09:44:46.300000173 +0000 UTC m=+0.174966402 container remove fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:46 compute-0 systemd[1]: libpod-conmon-fb57da5afe0e6bc06d00bfa63a2663cd6647fd1268b0c7e47e59b2666ead86dc.scope: Deactivated successfully.
Oct 10 09:44:46 compute-0 ceph-mon[73551]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:46 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 09:44:46 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:46 compute-0 ceph-mon[73551]: Deploying daemon osd.0 on compute-0
Oct 10 09:44:46 compute-0 podman[81409]: 2025-10-10 09:44:46.623777224 +0000 UTC m=+0.062842891 container create 8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Oct 10 09:44:46 compute-0 systemd[1]: Started libpod-conmon-8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792.scope.
Oct 10 09:44:46 compute-0 podman[81409]: 2025-10-10 09:44:46.597136095 +0000 UTC m=+0.036201862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:46 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd9b5af9715831ae16018cef78b713bc278578036f5095bf716d72206befdf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd9b5af9715831ae16018cef78b713bc278578036f5095bf716d72206befdf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd9b5af9715831ae16018cef78b713bc278578036f5095bf716d72206befdf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd9b5af9715831ae16018cef78b713bc278578036f5095bf716d72206befdf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd9b5af9715831ae16018cef78b713bc278578036f5095bf716d72206befdf1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-0 podman[81409]: 2025-10-10 09:44:46.738780722 +0000 UTC m=+0.177846419 container init 8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:44:46 compute-0 podman[81409]: 2025-10-10 09:44:46.755975442 +0000 UTC m=+0.195041149 container start 8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:44:46 compute-0 podman[81409]: 2025-10-10 09:44:46.760200335 +0000 UTC m=+0.199266032 container attach 8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 10 09:44:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 09:44:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:44:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:46 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Oct 10 09:44:46 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Oct 10 09:44:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test[81425]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 10 09:44:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test[81425]:                             [--no-systemd] [--no-tmpfs]
Oct 10 09:44:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test[81425]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 10 09:44:46 compute-0 systemd[1]: libpod-8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792.scope: Deactivated successfully.
Oct 10 09:44:46 compute-0 podman[81409]: 2025-10-10 09:44:46.968145339 +0000 UTC m=+0.407211096 container died 8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdd9b5af9715831ae16018cef78b713bc278578036f5095bf716d72206befdf1-merged.mount: Deactivated successfully.
Oct 10 09:44:47 compute-0 podman[81409]: 2025-10-10 09:44:47.003116778 +0000 UTC m=+0.442182445 container remove 8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:44:47 compute-0 systemd[1]: libpod-conmon-8f345dfe6dee9d8706d6214aa4d0a0092973c16fb6a73f04ed11af898823e792.scope: Deactivated successfully.
Oct 10 09:44:47 compute-0 systemd[1]: Reloading.
Oct 10 09:44:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:47 compute-0 systemd-rc-local-generator[81492]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:47 compute-0 systemd-sysv-generator[81496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:47 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 09:44:47 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:44:47 compute-0 ceph-mon[73551]: Deploying daemon osd.1 on compute-1
Oct 10 09:44:47 compute-0 systemd[1]: Reloading.
Oct 10 09:44:47 compute-0 systemd-sysv-generator[81559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:47 compute-0 systemd-rc-local-generator[81554]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:47 compute-0 sudo[81527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgewofmerodhphdqedyrjxubisomadc ; /usr/bin/python3'
Oct 10 09:44:47 compute-0 sudo[81527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:44:47 compute-0 systemd[1]: Starting Ceph osd.0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:44:47 compute-0 python3[81565]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.05864611 +0000 UTC m=+0.055969139 container create abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a (image=quay.io/ceph/ceph:v19, name=elegant_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:44:48 compute-0 systemd[1]: Started libpod-conmon-abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a.scope.
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.029802257 +0000 UTC m=+0.027125366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:44:48 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457b6bcf55dd5c523814f49978640e7a0084144456a573155eed65a07037d338/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457b6bcf55dd5c523814f49978640e7a0084144456a573155eed65a07037d338/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457b6bcf55dd5c523814f49978640e7a0084144456a573155eed65a07037d338/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 podman[81629]: 2025-10-10 09:44:48.159834783 +0000 UTC m=+0.066327868 container create 8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.168181555 +0000 UTC m=+0.165504654 container init abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a (image=quay.io/ceph/ceph:v19, name=elegant_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.181493383 +0000 UTC m=+0.178816442 container start abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a (image=quay.io/ceph/ceph:v19, name=elegant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.185852361 +0000 UTC m=+0.183175420 container attach abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a (image=quay.io/ceph/ceph:v19, name=elegant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:48 compute-0 podman[81629]: 2025-10-10 09:44:48.129825011 +0000 UTC m=+0.036318096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:48 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caed009537178b467a4ef5f7f32ff3755975c9f43b36187cd00a62de83637cc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caed009537178b467a4ef5f7f32ff3755975c9f43b36187cd00a62de83637cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caed009537178b467a4ef5f7f32ff3755975c9f43b36187cd00a62de83637cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caed009537178b467a4ef5f7f32ff3755975c9f43b36187cd00a62de83637cc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caed009537178b467a4ef5f7f32ff3755975c9f43b36187cd00a62de83637cc8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-0 podman[81629]: 2025-10-10 09:44:48.283030268 +0000 UTC m=+0.189523363 container init 8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:44:48 compute-0 podman[81629]: 2025-10-10 09:44:48.295364514 +0000 UTC m=+0.201857639 container start 8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:48 compute-0 podman[81629]: 2025-10-10 09:44:48.29966934 +0000 UTC m=+0.206162445 container attach 8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:44:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:48 compute-0 bash[81629]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:48 compute-0 ceph-mon[73551]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:48 compute-0 bash[81629]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 10 09:44:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192005781' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:44:48 compute-0 elegant_easley[81637]: 
Oct 10 09:44:48 compute-0 elegant_easley[81637]: {"fsid":"21f084a3-af34-5230-afe4-ea5cd24a55f4","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":90,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1760089480,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-10T09:43:15:731413+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-10T09:44:39.351574+0000","services":{}},"progress_events":{}}
Oct 10 09:44:48 compute-0 systemd[1]: libpod-abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a.scope: Deactivated successfully.
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.667986172 +0000 UTC m=+0.665309191 container died abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a (image=quay.io/ceph/ceph:v19, name=elegant_easley, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-457b6bcf55dd5c523814f49978640e7a0084144456a573155eed65a07037d338-merged.mount: Deactivated successfully.
Oct 10 09:44:48 compute-0 podman[81603]: 2025-10-10 09:44:48.708328953 +0000 UTC m=+0.705651962 container remove abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a (image=quay.io/ceph/ceph:v19, name=elegant_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:44:48 compute-0 systemd[1]: libpod-conmon-abee4c7fdda9940340e4a1afbafb84afedb9a30d29961114e3363770c6011d7a.scope: Deactivated successfully.
Oct 10 09:44:48 compute-0 sudo[81527]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:49 compute-0 lvm[81763]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:49 compute-0 lvm[81763]: VG ceph_vg0 finished
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:49 compute-0 bash[81629]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 10 09:44:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/192005781' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 09:44:49 compute-0 bash[81629]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 09:44:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate[81650]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 10 09:44:49 compute-0 bash[81629]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 10 09:44:49 compute-0 systemd[1]: libpod-8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6.scope: Deactivated successfully.
Oct 10 09:44:49 compute-0 podman[81629]: 2025-10-10 09:44:49.655995766 +0000 UTC m=+1.562488841 container died 8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:44:49 compute-0 systemd[1]: libpod-8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6.scope: Consumed 1.618s CPU time.
Oct 10 09:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-caed009537178b467a4ef5f7f32ff3755975c9f43b36187cd00a62de83637cc8-merged.mount: Deactivated successfully.
Oct 10 09:44:49 compute-0 podman[81629]: 2025-10-10 09:44:49.694139353 +0000 UTC m=+1.600632438 container remove 8fc0bdd2328271ad90588fe1aa2536ac5250da464e5530c38e125733d432fcb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:49 compute-0 podman[81922]: 2025-10-10 09:44:49.895146463 +0000 UTC m=+0.044182191 container create 202b142a1e8ea741cf398d0ec65626157fe2a98d6631a279bde97a1203b0aacd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656450f4333f8bf6546a5112841c293f9f9e6dffdb16ee2944c2c59cfe25ee81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656450f4333f8bf6546a5112841c293f9f9e6dffdb16ee2944c2c59cfe25ee81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656450f4333f8bf6546a5112841c293f9f9e6dffdb16ee2944c2c59cfe25ee81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656450f4333f8bf6546a5112841c293f9f9e6dffdb16ee2944c2c59cfe25ee81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656450f4333f8bf6546a5112841c293f9f9e6dffdb16ee2944c2c59cfe25ee81/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-0 podman[81922]: 2025-10-10 09:44:49.874196117 +0000 UTC m=+0.023231875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:49 compute-0 podman[81922]: 2025-10-10 09:44:49.984695373 +0000 UTC m=+0.133731141 container init 202b142a1e8ea741cf398d0ec65626157fe2a98d6631a279bde97a1203b0aacd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:44:50 compute-0 podman[81922]: 2025-10-10 09:44:50.006610712 +0000 UTC m=+0.155646440 container start 202b142a1e8ea741cf398d0ec65626157fe2a98d6631a279bde97a1203b0aacd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Oct 10 09:44:50 compute-0 bash[81922]: 202b142a1e8ea741cf398d0ec65626157fe2a98d6631a279bde97a1203b0aacd
Oct 10 09:44:50 compute-0 systemd[1]: Started Ceph osd.0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:44:50 compute-0 ceph-osd[81941]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:44:50 compute-0 ceph-osd[81941]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 10 09:44:50 compute-0 ceph-osd[81941]: pidfile_write: ignore empty --pid-file
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:50 compute-0 sudo[81295]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:50 compute-0 sudo[81953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:50 compute-0 sudo[81953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:50 compute-0 sudo[81953]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:50 compute-0 sudo[81978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:44:50 compute-0 sudo[81978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.697755014 +0000 UTC m=+0.054209979 container create c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:50 compute-0 systemd[1]: Started libpod-conmon-c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08.scope.
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.6756903 +0000 UTC m=+0.032145285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.806276065 +0000 UTC m=+0.162731130 container init c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.815480524 +0000 UTC m=+0.171935489 container start c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.81978097 +0000 UTC m=+0.176236025 container attach c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 09:44:50 compute-0 dreamy_hugle[82068]: 167 167
Oct 10 09:44:50 compute-0 systemd[1]: libpod-c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08.scope: Deactivated successfully.
Oct 10 09:44:50 compute-0 conmon[82068]: conmon c2a1d3fec9192c7a6613 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08.scope/container/memory.events
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.825095219 +0000 UTC m=+0.181550184 container died c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae42811f5e15dee26dd011db756003822690f35f47dde35bd30e9de5f5db5071-merged.mount: Deactivated successfully.
Oct 10 09:44:50 compute-0 podman[82050]: 2025-10-10 09:44:50.866987712 +0000 UTC m=+0.223442677 container remove c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hugle, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:44:50 compute-0 systemd[1]: libpod-conmon-c2a1d3fec9192c7a661392136b003874cf1310449f1fd03d9f8c1ffde112fc08.scope: Deactivated successfully.
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccafc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccafc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccafc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 10 09:44:50 compute-0 ceph-osd[81941]: bdev(0x562a8ccafc00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.071383286 +0000 UTC m=+0.065917325 container create 7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_colden, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 09:44:51 compute-0 ceph-mon[73551]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:51 compute-0 systemd[1]: Started libpod-conmon-7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4.scope.
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.037295706 +0000 UTC m=+0.031829795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:51 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4c0722715b5b44b024513253563713d08a6e98698f8880aeb43a7bb4cff7ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4c0722715b5b44b024513253563713d08a6e98698f8880aeb43a7bb4cff7ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4c0722715b5b44b024513253563713d08a6e98698f8880aeb43a7bb4cff7ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4c0722715b5b44b024513253563713d08a6e98698f8880aeb43a7bb4cff7ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.165572173 +0000 UTC m=+0.160106212 container init 7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_colden, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.183164256 +0000 UTC m=+0.177698275 container start 7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_colden, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.187238253 +0000 UTC m=+0.181772292 container attach 7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8ccaf800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:51 compute-0 ceph-osd[81941]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 10 09:44:51 compute-0 ceph-osd[81941]: load: jerasure load: lrc 
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:51 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:51 compute-0 lvm[82193]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:51 compute-0 lvm[82193]: VG ceph_vg0 finished
Oct 10 09:44:51 compute-0 lvm[82194]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:51 compute-0 lvm[82194]: VG ceph_vg0 finished
Oct 10 09:44:51 compute-0 bold_colden[82110]: {}
Oct 10 09:44:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:51 compute-0 systemd[1]: libpod-7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4.scope: Deactivated successfully.
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.912091053 +0000 UTC m=+0.906625062 container died 7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:51 compute-0 systemd[1]: libpod-7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4.scope: Consumed 1.131s CPU time.
Oct 10 09:44:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf4c0722715b5b44b024513253563713d08a6e98698f8880aeb43a7bb4cff7ed-merged.mount: Deactivated successfully.
Oct 10 09:44:51 compute-0 podman[82094]: 2025-10-10 09:44:51.961803879 +0000 UTC m=+0.956337918 container remove 7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_colden, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:51 compute-0 systemd[1]: libpod-conmon-7e9f1d9c9d413ae4abc71006adefab3d900205eda35692124a77f147edb6b7a4.scope: Deactivated successfully.
Oct 10 09:44:52 compute-0 ceph-osd[81941]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 10 09:44:52 compute-0 ceph-osd[81941]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:52 compute-0 sudo[81978]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db46c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs mount shared_bdev_used = 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Git sha 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: DB SUMMARY
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: DB Session ID:  FU7RT5M235238DYT0J8I
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                                     Options.env: 0x562a8db1bea0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                                Options.info_log: 0x562a8db1f800
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.write_buffer_manager: 0x562a8dc10a00
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.row_cache: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                              Options.wal_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.wal_compression: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.max_background_jobs: 4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Compression algorithms supported:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kZSTD supported: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd449b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd449b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fbe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd449b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3569403c-425c-453a-a864-5b06fbf810ed
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089492869060, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089492869272, "job": 1, "event": "recovery_finished"}
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 10 09:44:52 compute-0 ceph-osd[81941]: freelist init
Oct 10 09:44:52 compute-0 ceph-osd[81941]: freelist _read_cfg
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 09:44:52 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bluefs umount
Oct 10 09:44:52 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 09:44:52 compute-0 ceph-mon[73551]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bdev(0x562a8db47000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluefs mount shared_bdev_used = 4718592
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Git sha 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: DB SUMMARY
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: DB Session ID:  FU7RT5M235238DYT0J8J
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                                     Options.env: 0x562a8dcb6310
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                                Options.info_log: 0x562a8db1f9a0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.write_buffer_manager: 0x562a8dc10a00
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.row_cache: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                              Options.wal_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.wal_compression: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.max_background_jobs: 4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Compression algorithms supported:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kZSTD supported: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1f6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd45350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd449b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd449b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562a8db1fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562a8cd449b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3569403c-425c-453a-a864-5b06fbf810ed
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089493122873, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089493300543, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089493, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3569403c-425c-453a-a864-5b06fbf810ed", "db_session_id": "FU7RT5M235238DYT0J8J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:44:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089493372679, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089493, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3569403c-425c-453a-a864-5b06fbf810ed", "db_session_id": "FU7RT5M235238DYT0J8J", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089493380102, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089493, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3569403c-425c-453a-a864-5b06fbf810ed", "db_session_id": "FU7RT5M235238DYT0J8J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089493382156, "job": 1, "event": "recovery_finished"}
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562a8dce4000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: DB pointer 0x562a8dcc4000
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 10 09:44:53 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:44:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
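The block above is RocksDB's periodic "stats dump" as embedded in the ceph-osd (BlueStore) log: one pair of Compaction Stats tables per column family (m-2, p-0..p-2, O-0..O-2, L, P), each followed by block-cache, flush and stall counters. The occupancy value 18446744073709551615 is 2^64-1, which reads like an untracked counter printed as unsigned (cosmetic, not a real cache size). A minimal Python sketch for pulling the per-column-family cumulative compaction totals back out of a saved journal excerpt, assuming the exact line layout shown above:

    import re

    # Regexes keyed to the dump format above: a "** Compaction Stats [cf] **"
    # header names the column family; the "Cumulative compaction:" line in the
    # same section carries the totals we want.
    CF = re.compile(r'\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*')
    CUM = re.compile(r'Cumulative compaction: (?P<w>[\d.]+) GB write.*?(?P<r>[\d.]+) GB read')

    def compaction_totals(path):
        cf, totals = None, {}
        with open(path, errors="replace") as fh:
            for line in fh:
                m = CF.search(line)
                if m:
                    cf = m.group("cf")
                m = CUM.search(line)
                if m and cf is not None:
                    totals[cf] = (float(m.group("w")), float(m.group("r")))
        return totals  # {column family: (GB written, GB read)}

    if __name__ == "__main__":
        # "osd-startup.log" is a hypothetical file holding a dump like the one above.
        for cf, (wr, rd) in compaction_totals("osd-startup.log").items():
            print(f"{cf}: compaction wrote {wr} GB, read {rd} GB")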
Oct 10 09:44:53 compute-0 ceph-osd[81941]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 10 09:44:53 compute-0 ceph-osd[81941]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 10 09:44:53 compute-0 ceph-osd[81941]: _get_class not permitted to load lua
Oct 10 09:44:53 compute-0 ceph-osd[81941]: _get_class not permitted to load sdk
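The two refusals above are expected: RADOS object classes are loaded from a whitelist (the osd_class_load_list option), and 'lua' and 'sdk' are not on the default list, while cephfs and cls_hello (loaded just before) are. A quick check of the whitelist in effect, as a sketch assuming a working ceph CLI with an admin keyring:

    import subprocess

    # Print the object-class whitelist the OSDs will honor. Setting it to '*'
    # would permit every class, including lua and sdk.
    out = subprocess.run(["ceph", "config", "get", "osd", "osd_class_load_list"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())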
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 load_pgs
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 load_pgs opened 0 pgs
Oct 10 09:44:53 compute-0 ceph-osd[81941]: osd.0 0 log_to_monitors true
Oct 10 09:44:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0[81937]: 2025-10-10T09:44:53.428+0000 7fcf084aa740 -1 osd.0 0 log_to_monitors true
Oct 10 09:44:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct 10 09:44:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 10 09:44:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 10 09:44:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:54 compute-0 ceph-mon[73551]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
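The mon_command JSON in these audit lines maps one-to-one onto the ceph CLI; the boot-time registration osd.0 performs here is the same as running the two commands by hand. A sketch, assuming an admin keyring on the host:

    import subprocess

    # Equivalent of {"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}
    subprocess.run(["ceph", "osd", "crush", "set-device-class", "hdd", "0"], check=True)

    # Equivalent of {"prefix": "osd crush create-or-move", "id": 0, "weight": 0.0195,
    #                "args": ["host=compute-0", "root=default"]}
    subprocess.run(["ceph", "osd", "crush", "create-or-move", "osd.0", "0.0195",
                    "host=compute-0", "root=default"], check=True)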
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:54 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:54 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
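The two ENOENT replies are transient: the mgr polls "osd metadata" while the map still reads "2 total, 0 up, 2 in", i.e. both OSDs are registered but neither has booted, so the mon has nothing to return yet. A small polling sketch, assuming a working ceph CLI, that succeeds once the OSD comes up:

    import json, subprocess, time

    def wait_for_osd_metadata(osd_id, timeout=60):
        # Retry "ceph osd metadata <id>" until the mon can answer; during
        # bring-up the command fails with ENOENT exactly as logged above.
        deadline = time.time() + timeout
        while time.time() < deadline:
            p = subprocess.run(["ceph", "osd", "metadata", str(osd_id),
                                "--format", "json"],
                               capture_output=True, text=True)
            if p.returncode == 0:
                return json.loads(p.stdout)
            time.sleep(2)
        raise TimeoutError(f"osd.{osd_id} metadata not available")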
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:54 compute-0 sudo[82624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:44:54 compute-0 sudo[82624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:54 compute-0 sudo[82624]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:54 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 10 09:44:54 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 10 09:44:54 compute-0 sudo[82649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:54 compute-0 sudo[82649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:54 compute-0 sudo[82649]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:54 compute-0 sudo[82674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:44:54 compute-0 sudo[82674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
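The sudo-wrapped command above is the cephadm mgr module running its copied, checksum-named cephadm binary with "ls" to inventory the daemons deployed on this host. The same listing by hand, as a sketch assuming a cephadm binary on PATH (run as root, which the log does via sudo):

    import json, subprocess

    # "cephadm ls" emits a JSON array of daemon records for this host.
    daemons = json.loads(subprocess.run(["cephadm", "ls"], capture_output=True,
                                        text=True, check=True).stdout)
    # Each record carries (among other fields) a "name" such as "osd.0"
    # or "mon.compute-0".
    print([d["name"] for d in daemons])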
Oct 10 09:44:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:44:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0 done with init, starting boot process
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0 start_boot
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 10 09:44:55 compute-0 ceph-osd[81941]: osd.0 0  bench count 12288000 bsize 4 KiB
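The maybe_override_options_for_qos lines show the mClock scheduler pinning the recovery/backfill knobs to its profile values rather than leaving them operator-set, and the final line is the startup self-bench the OSD uses to estimate device capacity: 12288000 bytes at a 4 KiB block size is 3000 writes. A sketch of the arithmetic plus reading back the measured capacity (the option name follows the upstream mClock docs; treat it as an assumption for this release):

    # Boot-time "osd bench" sizing is plain arithmetic:
    count_bytes, bsize = 12288000, 4096
    assert count_bytes // bsize == 3000  # 3000 I/Os issued

    # The resulting IOPS estimate is stored per OSD; the hdd variant matches
    # the device class set earlier in this log.
    import subprocess
    subprocess.run(["ceph", "config", "get", "osd.0",
                    "osd_mclock_max_capacity_iops_hdd"], check=True)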
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:55 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:55 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:55 compute-0 ceph-mon[73551]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 10 09:44:55 compute-0 ceph-mon[73551]: osdmap e6: 2 total, 0 up, 2 in
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2298200206; not ready for session (expect reconnect)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:55 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 10 09:44:55 compute-0 podman[82768]: 2025-10-10 09:44:55.34054307 +0000 UTC m=+0.090934699 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 podman[82768]: 2025-10-10 09:44:55.456453 +0000 UTC m=+0.206844649 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 sudo[82674]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:55 compute-0 sudo[82855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:55 compute-0 sudo[82855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:55 compute-0 sudo[82855]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:55 compute-0 sudo[82880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:44:55 compute-0 sudo[82880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
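gather-facts is cephadm's host probe: it emits a JSON blob of hardware and OS facts that the mgr then caches under the mgr/cephadm/host.* config keys seen in the surrounding audit lines. Reproducing it by hand, as a sketch (run as root; exact field names vary by release, hence the defensive lookups):

    import json, subprocess

    facts = json.loads(subprocess.run(["cephadm", "gather-facts"],
                                      capture_output=True, text=True,
                                      check=True).stdout)
    # Peek at two commonly present fields; the full set covers CPU, memory,
    # NICs and kernel details.
    print(facts.get("hostname"), facts.get("memory_total_kb"))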
Oct 10 09:44:56 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2298200206; not ready for session (expect reconnect)
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 09:44:56 compute-0 ceph-mon[73551]: osdmap e7: 2 total, 0 up, 2 in
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:56 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:56 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:56 compute-0 sudo[82880]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:44:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:56 compute-0 sudo[82935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:56 compute-0 sudo[82935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:56 compute-0 sudo[82935]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:56 compute-0 sudo[82960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- inventory --format=json-pretty --filter-for-batch
Oct 10 09:44:56 compute-0 sudo[82960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
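This probe asks ceph-volume, inside the ceph container, for a device inventory pre-filtered for OSD batch deployment; cephadm uses the JSON answer to decide which disks are claimable. By hand it looks roughly like the sketch below (same fsid as the log; the "path"/"available" fields follow ceph-volume's JSON inventory format, and the default container image is assumed where the log pins one by digest):

    import json, subprocess

    fsid = "21f084a3-af34-5230-afe4-ea5cd24a55f4"
    inv = json.loads(subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--",
         "inventory", "--format=json-pretty", "--filter-for-batch"],
        capture_output=True, text=True, check=True).stdout)
    # List devices ceph-volume considers usable for new OSDs.
    print([d["path"] for d in inv if d.get("available")])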
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2298200206; not ready for session (expect reconnect)
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Oct 10 09:44:57 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:57 compute-0 ceph-mon[73551]: purged_snaps scrub starts
Oct 10 09:44:57 compute-0 ceph-mon[73551]: purged_snaps scrub ok
Oct 10 09:44:57 compute-0 ceph-mon[73551]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 10 09:44:57 compute-0 ceph-mon[73551]: osdmap e8: 2 total, 0 up, 2 in
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:57 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2840395396; not ready for session (expect reconnect)
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.102058364 +0000 UTC m=+0.052005066 container create 12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 09:44:57 compute-0 systemd[1]: Started libpod-conmon-12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b.scope.
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.075924672 +0000 UTC m=+0.025871454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.223185949 +0000 UTC m=+0.173132671 container init 12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.230792346 +0000 UTC m=+0.180739088 container start 12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:57 compute-0 sad_wright[83040]: 167 167
Oct 10 09:44:57 compute-0 systemd[1]: libpod-12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b.scope: Deactivated successfully.
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.244616672 +0000 UTC m=+0.194563454 container attach 12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.246024519 +0000 UTC m=+0.195971221 container died 12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 09:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d80d46093d3c3d6ec257045b0a16259b3d3559e54530bc3a5389940ed7eb13-merged.mount: Deactivated successfully.
Oct 10 09:44:57 compute-0 podman[83023]: 2025-10-10 09:44:57.300393353 +0000 UTC m=+0.250340055 container remove 12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_wright, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:44:57 compute-0 systemd[1]: libpod-conmon-12b71b84e94a4140f05ba4ccece09c4f9254da79cb1f521cc601f259634f304b.scope: Deactivated successfully.
Oct 10 09:44:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:57 compute-0 podman[83065]: 2025-10-10 09:44:57.506568627 +0000 UTC m=+0.055661368 container create 7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:57 compute-0 systemd[1]: Started libpod-conmon-7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d.scope.
Oct 10 09:44:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac3a81924504e0ff9fe5f2345c937b276e027f6c52b75fd4ae161a99de28122/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac3a81924504e0ff9fe5f2345c937b276e027f6c52b75fd4ae161a99de28122/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac3a81924504e0ff9fe5f2345c937b276e027f6c52b75fd4ae161a99de28122/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac3a81924504e0ff9fe5f2345c937b276e027f6c52b75fd4ae161a99de28122/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:57 compute-0 podman[83065]: 2025-10-10 09:44:57.480377914 +0000 UTC m=+0.029470665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:57 compute-0 podman[83065]: 2025-10-10 09:44:57.590407225 +0000 UTC m=+0.139499996 container init 7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:57 compute-0 podman[83065]: 2025-10-10 09:44:57.599378647 +0000 UTC m=+0.148471388 container start 7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:57 compute-0 podman[83065]: 2025-10-10 09:44:57.606962603 +0000 UTC m=+0.156055364 container attach 7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.958 iops: 8693.274 elapsed_sec: 0.345
Oct 10 09:44:57 compute-0 ceph-osd[81941]: log_channel(cluster) log [WRN] : OSD bench result of 8693.274022 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
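The WRN above comes from the mClock scheduler's startup self-benchmark: the measured 8693 IOPS falls outside the 50-500 IOPS sanity window, so the OSD keeps the default capacity of 315 IOPS instead of trusting the result. The message names the remedy itself; a minimal sketch, assuming the operator keeps the HDD-class settings this OSD is using (315 IOPS and the 50-500 window are the hdd defaults) and that an independent fio run produced the hypothetical figure below:

    # establish real capacity with fio first; 450 is a placeholder result
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 450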
Oct 10 09:44:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0[81937]: 2025-10-10T09:44:57.870+0000 7fcf04c40640 -1 osd.0 0 waiting for initial osdmap
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 0 waiting for initial osdmap
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 check_osdmap_features require_osd_release unknown -> squid
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 set_numa_affinity not setting numa affinity
Oct 10 09:44:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-0[81937]: 2025-10-10T09:44:57.896+0000 7fceffa55640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 09:44:57 compute-0 ceph-osd[81941]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
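The _collect_metadata complaints are expected in this environment: osd.0 is backed by a loop device, and loop devices expose no model or serial and no /dev/disk/by-path symlink, so no unique device id can be recorded. The file backing the loop device can be confirmed with:

    losetup --list /dev/loop3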
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2298200206; not ready for session (expect reconnect)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:58 compute-0 ceph-mon[73551]: purged_snaps scrub starts
Oct 10 09:44:58 compute-0 ceph-mon[73551]: purged_snaps scrub ok
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 10 09:44:58 compute-0 ceph-mon[73551]: osdmap e9: 2 total, 0 up, 2 in
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-1 to  5248M
Oct 10 09:44:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2840395396; not ready for session (expect reconnect)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206] boot
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:58 compute-0 ceph-osd[81941]: osd.0 10 state: booting -> active
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]: [
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:     {
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "available": false,
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "being_replaced": false,
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "ceph_device_lvm": false,
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "lsm_data": {},
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "lvs": [],
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "path": "/dev/sr0",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "rejected_reasons": [
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "Has a FileSystem",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "Insufficient space (<5GB)"
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         ],
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         "sys_api": {
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "actuators": null,
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "device_nodes": [
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:                 "sr0"
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             ],
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "devname": "sr0",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "human_readable_size": "482.00 KB",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "id_bus": "ata",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "model": "QEMU DVD-ROM",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "nr_requests": "2",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "parent": "/dev/sr0",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "partitions": {},
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "path": "/dev/sr0",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "removable": "1",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "rev": "2.5+",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "ro": "0",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "rotational": "0",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "sas_address": "",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "sas_device_handle": "",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "scheduler_mode": "mq-deadline",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "sectors": 0,
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "sectorsize": "2048",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "size": 493568.0,
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "support_discard": "2048",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "type": "disk",
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:             "vendor": "QEMU"
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:         }
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]:     }
Oct 10 09:44:58 compute-0 relaxed_ishizaka[83082]: ]
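The JSON array printed by relaxed_ishizaka is the per-host device inventory (ceph-volume inventory format) that the short-lived cephadm container gathers; the surrounding config-key set commands (mgr/cephadm/host.<host>.devices.0) show the mgr caching the result. Here /dev/sr0 is rejected as an OSD candidate because it already carries a filesystem and is under 5 GB. Roughly the same view is available from the orchestrator:

    ceph orch device ls --format json-pretty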
Oct 10 09:44:58 compute-0 systemd[1]: libpod-7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d.scope: Deactivated successfully.
Oct 10 09:44:58 compute-0 podman[83065]: 2025-10-10 09:44:58.237227351 +0000 UTC m=+0.786320092 container died 7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac3a81924504e0ff9fe5f2345c937b276e027f6c52b75fd4ae161a99de28122-merged.mount: Deactivated successfully.
Oct 10 09:44:58 compute-0 podman[83065]: 2025-10-10 09:44:58.282794468 +0000 UTC m=+0.831887209 container remove 7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:58 compute-0 systemd[1]: libpod-conmon-7f4950d48e195153a890f4936c80a016743dc54502a8232670d5a0a68a1ee89d.scope: Deactivated successfully.
Oct 10 09:44:58 compute-0 sudo[82960]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: [devicehealth INFO root] creating mgr pool
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 10 09:44:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:44:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:44:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
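The two autotune outcomes differ because cephadm splits each host's memory budget across its local OSDs: compute-1 has room for 5248M, while on compute-0, which also hosts the mon and mgr, the remainder works out to 134240665 bytes ≈ 128 MiB. That is below the option's hard floor of 939524096 bytes (= 896 MiB), so the config set is rejected and osd.0 keeps its previous target. One way to handle such a colocated host is to pin the target using the same host mask the module itself uses (a sketch; the value shown is simply the documented minimum):

    ceph config set osd/host:compute-0 osd_memory_target 939524096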
Oct 10 09:44:59 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2840395396; not ready for session (expect reconnect)
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:59 compute-0 ceph-mon[73551]: OSD bench result of 8693.274022 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mon[73551]: osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206] boot
Oct 10 09:44:59 compute-0 ceph-mon[73551]: osdmap e10: 2 total, 1 up, 2 in
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:44:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:44:59 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:44:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 09:44:59 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:44:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:44:59 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:44:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct 10 09:44:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:44:59 compute-0 ceph-osd[81941]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 10 09:44:59 compute-0 ceph-osd[81941]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 10 09:44:59 compute-0 ceph-osd[81941]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 10 09:44:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 09:45:00 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2840395396; not ready for session (expect reconnect)
Oct 10 09:45:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:45:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:00 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:45:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 10 09:45:00 compute-0 ceph-mon[73551]: osdmap e11: 2 total, 1 up, 2 in
Oct 10 09:45:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:45:00 compute-0 ceph-mon[73551]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 09:45:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 10 09:45:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 10 09:45:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Oct 10 09:45:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Oct 10 09:45:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:45:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:00 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:45:01 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2840395396; not ready for session (expect reconnect)
Oct 10 09:45:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:45:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:01 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 09:45:01 compute-0 ceph-mon[73551]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 10 09:45:01 compute-0 ceph-mon[73551]: osdmap e12: 2 total, 1 up, 2 in
Oct 10 09:45:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:01 compute-0 ceph-mon[73551]: OSD bench result of 2508.856277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:45:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 10 09:45:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct 10 09:45:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396] boot
Oct 10 09:45:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct 10 09:45:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:45:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 09:45:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
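The POOL_APP_NOT_ENABLED warning was transient: the .mgr pool was created a moment before its application tag landed, and the check cleared as soon as the enable command finished. The manual equivalent of what the mgr dispatched is:

    ceph osd pool application enable .mgr mgr --yes-i-really-mean-it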
Oct 10 09:45:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 10 09:45:02 compute-0 ceph-mon[73551]: osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396] boot
Oct 10 09:45:02 compute-0 ceph-mon[73551]: osdmap e13: 2 total, 2 up, 2 in
Oct 10 09:45:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:02 compute-0 ceph-mon[73551]: pgmap v47: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 09:45:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct 10 09:45:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 10 09:45:02 compute-0 ceph-mgr[73845]: [devicehealth INFO root] creating main.db for devicehealth
Oct 10 09:45:02 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 09:45:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 10 09:45:02 compute-0 sudo[84064]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 10 09:45:02 compute-0 sudo[84064]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 09:45:02 compute-0 sudo[84064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 10 09:45:02 compute-0 sudo[84064]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
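The admin-socket 'smart' command and the sudo session around it are the devicehealth module scraping SMART data: the mgr shells out to smartctl -x --json=o for each backing device (the pam_systemd bus error is harmless, since the containerized mgr has no system bus to reach). Once collected, the metrics can be inspected with (the device id is a placeholder):

    ceph device ls
    ceph device get-health-metrics <devid>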
Oct 10 09:45:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:45:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:03 compute-0 ceph-mon[73551]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:45:03 compute-0 ceph-mon[73551]: osdmap e14: 2 total, 2 up, 2 in
Oct 10 09:45:03 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 10 09:45:03 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 10 09:45:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 10 09:45:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct 10 09:45:03 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:04 compute-0 ceph-mon[73551]: osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:04 compute-0 ceph-mon[73551]: pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:05 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.xkdepb(active, since 87s)
Oct 10 09:45:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:06 compute-0 ceph-mon[73551]: mgrmap e9: compute-0.xkdepb(active, since 87s)
Oct 10 09:45:06 compute-0 ceph-mon[73551]: pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:45:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:45:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:45:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:45:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:45:08 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:45:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:08 compute-0 ceph-mon[73551]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:10 compute-0 ceph-mon[73551]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:12 compute-0 ceph-mon[73551]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:14 compute-0 ceph-mon[73551]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:45:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:45:14 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:14 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:15 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
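These "Updating compute-2:" lines pair with the config generate-minimal-conf and auth get client.admin dispatches just above: cephadm pushes a minimal ceph.conf and the admin keyring to every managed host. For this cluster the minimal conf would look roughly like the sketch below (fsid taken from these logs; the mon address and ports are assumed to be compute-0's defaults):

    [global]
        fsid = 21f084a3-af34-5230-afe4-ea5cd24a55f4
        mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]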
Oct 10 09:45:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:15 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:45:15 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:15 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:45:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:45:16 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:45:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:45:16 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:16 compute-0 ceph-mon[73551]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:16 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:45:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:45:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:16 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 9e55e9a2-9b8b-4dae-afef-3435a3a81644 (Updating mon deployment (+2 -> 3))
Oct 10 09:45:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 10 09:45:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 10 09:45:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:16 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct 10 09:45:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct 10 09:45:17 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:45:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:17 compute-0 ceph-mon[73551]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:17 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:17 compute-0 ceph-mon[73551]: Deploying daemon mon.compute-2 on compute-2
Oct 10 09:45:17 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 10 09:45:17 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 10 09:45:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:18 compute-0 ceph-mon[73551]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 10 09:45:18 compute-0 ceph-mon[73551]: Cluster is now healthy
Oct 10 09:45:18 compute-0 sudo[84090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eejechahpiltlhnhwwjcsmscklrbmmne ; /usr/bin/python3'
Oct 10 09:45:18 compute-0 sudo[84090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:19 compute-0 python3[84092]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.120306282 +0000 UTC m=+0.041064116 container create 354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792 (image=quay.io/ceph/ceph:v19, name=adoring_thompson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:45:19 compute-0 systemd[1]: Started libpod-conmon-354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792.scope.
Oct 10 09:45:19 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aa1adad6fa94359999473287ce975a326dc049ccab8c264dde065270d211f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aa1adad6fa94359999473287ce975a326dc049ccab8c264dde065270d211f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aa1adad6fa94359999473287ce975a326dc049ccab8c264dde065270d211f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.19736761 +0000 UTC m=+0.118125464 container init 354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792 (image=quay.io/ceph/ceph:v19, name=adoring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.103172984 +0000 UTC m=+0.023930838 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.204542033 +0000 UTC m=+0.125299867 container start 354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792 (image=quay.io/ceph/ceph:v19, name=adoring_thompson, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.208814576 +0000 UTC m=+0.129572430 container attach 354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792 (image=quay.io/ceph/ceph:v19, name=adoring_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167870161' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:45:19 compute-0 adoring_thompson[84110]: 
Oct 10 09:45:19 compute-0 adoring_thompson[84110]: {"fsid":"21f084a3-af34-5230-afe4-ea5cd24a55f4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":121,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":15,"num_osds":2,"num_up_osds":2,"osd_up_since":1760089501,"num_in_osds":2,"osd_in_since":1760089480,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894697472,"bytes_avail":42046586880,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-10-10T09:43:15:731413+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-10T09:44:39.351574+0000","services":{}},"progress_events":{}}
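
The ansible task at 09:45:19 gates on this JSON document: it pipes `ceph status --format json` through `jq .osdmap.num_up_osds` to count the up OSDs. A minimal Python sketch of the same check, assuming the JSON above has been saved to a hypothetical status.json:

    import json

    # Parse the `ceph status --format json` document logged above
    # (status.json is a hypothetical capture of that output).
    with open("status.json") as f:
        status = json.load(f)

    # Equivalent of the `jq .osdmap.num_up_osds` filter in the podman command.
    print(status["osdmap"]["num_up_osds"])  # 2 in this log
    print(status["health"]["status"])       # "HEALTH_OK"
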
Oct 10 09:45:19 compute-0 systemd[1]: libpod-354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792.scope: Deactivated successfully.
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.641916785 +0000 UTC m=+0.562674639 container died 354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792 (image=quay.io/ceph/ceph:v19, name=adoring_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-236aa1adad6fa94359999473287ce975a326dc049ccab8c264dde065270d211f-merged.mount: Deactivated successfully.
Oct 10 09:45:19 compute-0 podman[84094]: 2025-10-10 09:45:19.68593937 +0000 UTC m=+0.606697204 container remove 354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792 (image=quay.io/ceph/ceph:v19, name=adoring_thompson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:19 compute-0 systemd[1]: libpod-conmon-354017a4d73a86c13943d7dc7576e84e8ed771c751ae85fb138f39bfa834d792.scope: Deactivated successfully.
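
The create -> init -> start -> attach -> died -> remove sequence above is the normal lifecycle of a `podman run --rm` one-shot: systemd wraps the conmon monitor and the container itself in transient scopes (libpod-conmon-*.scope and libpod-*.scope), and both scopes plus the overlay mount are torn down as soon as the `ceph status` invocation exits.
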
Oct 10 09:45:19 compute-0 sudo[84090]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct 10 09:45:19 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct 10 09:45:19 compute-0 ceph-mon[73551]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1167870161' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct 10 09:45:19 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:19 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
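
The metadata failures here read as transient join-time noise: mon.compute-2 was only just added to the monmap, so `mon metadata` returns (2) ENOENT, then (22) EINVAL, until the new mon has joined quorum and reported its metadata; the `handle_open ignoring open ... expect reconnect` lines are the mgr declining the new mon's session for the same reason, and cephadm keeps retrying until the mon reports in.
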
Oct 10 09:45:19 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 10 09:45:19 compute-0 ceph-mon[73551]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct 10 09:45:19 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
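
The `collect_metadata vda: no unique device id` warning is expected on a KVM guest like this one: the virtio disk exposes neither a model nor a serial number, so the mon cannot derive a stable device id for device-health tracking; it is harmless here.
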
Oct 10 09:45:20 compute-0 sudo[84171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqzfdlrhagtcljhpdgrohljwupqutlo ; /usr/bin/python3'
Oct 10 09:45:20 compute-0 sudo[84171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:20 compute-0 python3[84173]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:20 compute-0 podman[84174]: 2025-10-10 09:45:20.292668654 +0000 UTC m=+0.042940930 container create 4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c (image=quay.io/ceph/ceph:v19, name=objective_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:45:20 compute-0 systemd[1]: Started libpod-conmon-4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c.scope.
Oct 10 09:45:20 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2436d15a0b288c42848fa74df475ff3088871b1732d85340b232c66ee4b33a4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2436d15a0b288c42848fa74df475ff3088871b1732d85340b232c66ee4b33a4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:20 compute-0 podman[84174]: 2025-10-10 09:45:20.274417149 +0000 UTC m=+0.024689435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:20 compute-0 podman[84174]: 2025-10-10 09:45:20.371156111 +0000 UTC m=+0.121428377 container init 4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c (image=quay.io/ceph/ceph:v19, name=objective_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:45:20 compute-0 podman[84174]: 2025-10-10 09:45:20.376955657 +0000 UTC m=+0.127227923 container start 4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c (image=quay.io/ceph/ceph:v19, name=objective_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:20 compute-0 podman[84174]: 2025-10-10 09:45:20.380071482 +0000 UTC m=+0.130343758 container attach 4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c (image=quay.io/ceph/ceph:v19, name=objective_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:20 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:20 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
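
The bursts of `handle_auth_request failed to assign global_id` track the monitor elections: while compute-0 is in the `(electing)` state there is no leader to allocate global_ids, so new authentication attempts are rejected and clients retry until a quorum re-forms.
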
Oct 10 09:45:20 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:20 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:20 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:20 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 10 09:45:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:21 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:21 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:21 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:21 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:21 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 10 09:45:22 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:45:22 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 10 09:45:22 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 10 09:45:22 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 10 09:45:22 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:22 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:22 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:22 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 10 09:45:22 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:22 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:22 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:22 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 10 09:45:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:23 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:23 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:23 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 10 09:45:23 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:23 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:23 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:23 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:23 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 10 09:45:24 compute-0 ceph-mon[73551]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : monmap epoch 2
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : last_changed 2025-10-10T09:45:19.903599+0000
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : created 2025-10-10T09:43:13.233588+0000
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap 
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.xkdepb(active, since 106s)
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : overall HEALTH_OK
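
The quorum has now grown from one mon to two (compute-0, compute-2), with compute-1 still being deployed; the three-mon quorum appears further down at 09:45:31. A sketch of how a caller could wait for the full quorum using the standard `ceph quorum_status` command (the expected count and poll interval are assumptions, and the admin keyring is taken to be in place as in the podman tasks above):

    import json
    import subprocess
    import time

    EXPECTED_MONS = 3  # compute-0, compute-1, compute-2

    def quorum_names():
        # `ceph quorum_status --format json` reports the mons currently in quorum.
        out = subprocess.check_output(
            ["ceph", "quorum_status", "--format", "json"])
        return json.loads(out)["quorum_names"]

    names = quorum_names()
    while len(names) < EXPECTED_MONS:
        time.sleep(5)  # assumed poll interval
        names = quorum_names()
    print("in quorum:", ",".join(names))
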
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 9e55e9a2-9b8b-4dae-afef-3435a3a81644 (Updating mon deployment (+2 -> 3))
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 9e55e9a2-9b8b-4dae-afef-3435a3a81644 (Updating mon deployment (+2 -> 3)) in 8 seconds
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:24 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 397d0327-4bac-433e-a1f5-3a5219dad7be (Updating mgr deployment (+2 -> 3))
Oct 10 09:45:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 10 09:45:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: Deploying daemon mon.compute-1 on compute-1
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-0 calling monitor election
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-2 calling monitor election
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: monmap epoch 2
Oct 10 09:45:25 compute-0 ceph-mon[73551]: fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:25 compute-0 ceph-mon[73551]: last_changed 2025-10-10T09:45:19.903599+0000
Oct 10 09:45:25 compute-0 ceph-mon[73551]: created 2025-10-10T09:43:13.233588+0000
Oct 10 09:45:25 compute-0 ceph-mon[73551]: min_mon_release 19 (squid)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: election_strategy: 1
Oct 10 09:45:25 compute-0 ceph-mon[73551]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:45:25 compute-0 ceph-mon[73551]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 10 09:45:25 compute-0 ceph-mon[73551]: fsmap 
Oct 10 09:45:25 compute-0 ceph-mon[73551]: osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mgrmap e9: compute-0.xkdepb(active, since 106s)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: overall HEALTH_OK
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:25 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:25 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
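
`auth get-or-create` is what makes these deploy steps safe to repeat: if the entity (here mgr.compute-2.gkrssp) already exists with identical caps the existing key is returned, and a new one is minted only otherwise, so the dispatch/finished pairs in this log are idempotent across cephadm passes.
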
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:45:25 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:45:25 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:25 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 10 09:45:25 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/203412358; not ready for session (expect reconnect)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: Deploying daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct 10 09:45:26 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:45:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:45:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:26 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 10 09:45:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 10 09:45:26 compute-0 ceph-mon[73551]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:26 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:26 compute-0 ceph-mgr[73845]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct 10 09:45:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:45:26.907+0000 7ff1c5736640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct 10 09:45:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:27 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:27 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:27 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:27 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:27 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 10 09:45:27 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:27 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:28 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:28 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:28 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:28 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 10 09:45:28 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 3 completed events
Oct 10 09:45:28 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:45:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:29 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:29 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:29 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 10 09:45:29 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:29 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:29 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:29 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:30 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:30 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:30 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:30 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:30 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:30 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 10 09:45:30 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:30 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 10 09:45:31 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 10 09:45:31 compute-0 ceph-mon[73551]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : last_changed 2025-10-10T09:45:26.181993+0000
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : created 2025-10-10T09:43:13.233588+0000
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.xkdepb(active, since 112s)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.rfugxc on compute-1
Oct 10 09:45:31 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.rfugxc on compute-1
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0 calling monitor election
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-2 calling monitor election
Oct 10 09:45:31 compute-0 ceph-mon[73551]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-1 calling monitor election
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: monmap epoch 3
Oct 10 09:45:31 compute-0 ceph-mon[73551]: fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:31 compute-0 ceph-mon[73551]: last_changed 2025-10-10T09:45:26.181993+0000
Oct 10 09:45:31 compute-0 ceph-mon[73551]: created 2025-10-10T09:43:13.233588+0000
Oct 10 09:45:31 compute-0 ceph-mon[73551]: min_mon_release 19 (squid)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: election_strategy: 1
Oct 10 09:45:31 compute-0 ceph-mon[73551]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:45:31 compute-0 ceph-mon[73551]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 10 09:45:31 compute-0 ceph-mon[73551]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 10 09:45:31 compute-0 ceph-mon[73551]: fsmap 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:31 compute-0 ceph-mon[73551]: mgrmap e9: compute-0.xkdepb(active, since 112s)
Oct 10 09:45:31 compute-0 ceph-mon[73551]: overall HEALTH_OK
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:32 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/96631670; not ready for session (expect reconnect)
Oct 10 09:45:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:45:32 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:32 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 09:45:32 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:32 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:32 compute-0 ceph-mon[73551]: Deploying daemon mgr.compute-1.rfugxc on compute-1
Oct 10 09:45:32 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 10 09:45:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
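
Note how the arguments of the 09:45:20 ansible task (`osd pool create vms  replicated_rule --autoscale-mode on`) map into this mon_command: with pg_num/pgp_num omitted, the positional `replicated_rule` is bound to the `erasure_code_profile` field rather than an explicit rule field; for a replicated pool the monitor presumably resolves that value as the CRUSH rule name.
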
Oct 10 09:45:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:45:33.201+0000 7ff1c5736640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct 10 09:45:33 compute-0 ceph-mgr[73845]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 397d0327-4bac-433e-a1f5-3a5219dad7be (Updating mgr deployment (+2 -> 3))
Oct 10 09:45:33 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 397d0327-4bac-433e-a1f5-3a5219dad7be (Updating mgr deployment (+2 -> 3)) in 8 seconds
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 6aa91174-fe39-4d9a-8243-abce677c4d6d (Updating crash deployment (+1 -> 3))
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:33 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct 10 09:45:33 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 10 09:45:33 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:33 compute-0 ceph-mon[73551]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:45:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 09:45:33 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct 10 09:45:33 compute-0 objective_colden[84189]: pool 'vms' created
Oct 10 09:45:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct 10 09:45:33 compute-0 systemd[1]: libpod-4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c.scope: Deactivated successfully.
Oct 10 09:45:33 compute-0 podman[84174]: 2025-10-10 09:45:33.347830008 +0000 UTC m=+13.098102274 container died 4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c (image=quay.io/ceph/ceph:v19, name=objective_colden, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2436d15a0b288c42848fa74df475ff3088871b1732d85340b232c66ee4b33a4f-merged.mount: Deactivated successfully.
Oct 10 09:45:33 compute-0 podman[84174]: 2025-10-10 09:45:33.39352226 +0000 UTC m=+13.143794526 container remove 4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c (image=quay.io/ceph/ceph:v19, name=objective_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 09:45:33 compute-0 systemd[1]: libpod-conmon-4f340a94aaaeb657f10be1e58fde1a1e1e11f5f191cd5258f87e6d33b2c11f0c.scope: Deactivated successfully.
Oct 10 09:45:33 compute-0 sudo[84171]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:33 compute-0 sudo[84249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcoqtwbnpyywwyqyfcpwrmyhduqcbhhl ; /usr/bin/python3'
Oct 10 09:45:33 compute-0 sudo[84249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:33 compute-0 python3[84251]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:33 compute-0 podman[84252]: 2025-10-10 09:45:33.798938954 +0000 UTC m=+0.042539406 container create 0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12 (image=quay.io/ceph/ceph:v19, name=lucid_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:33 compute-0 systemd[1]: Started libpod-conmon-0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12.scope.
Oct 10 09:45:33 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aaa29709f10765ed26decfb0af6ff266c1a2d06c015763b298457976432934e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aaa29709f10765ed26decfb0af6ff266c1a2d06c015763b298457976432934e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:33 compute-0 podman[84252]: 2025-10-10 09:45:33.780206681 +0000 UTC m=+0.023807163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:33 compute-0 podman[84252]: 2025-10-10 09:45:33.889960733 +0000 UTC m=+0.133561275 container init 0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12 (image=quay.io/ceph/ceph:v19, name=lucid_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:45:33 compute-0 podman[84252]: 2025-10-10 09:45:33.900177369 +0000 UTC m=+0.143777831 container start 0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12 (image=quay.io/ceph/ceph:v19, name=lucid_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:33 compute-0 podman[84252]: 2025-10-10 09:45:33.903032975 +0000 UTC m=+0.146633537 container attach 0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12 (image=quay.io/ceph/ceph:v19, name=lucid_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 09:45:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 10 09:45:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 10 09:45:34 compute-0 ceph-mon[73551]: Deploying daemon crash.compute-2 on compute-2
Oct 10 09:45:34 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:34 compute-0 ceph-mon[73551]: osdmap e16: 2 total, 2 up, 2 in
Oct 10 09:45:34 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Oct 10 09:45:34 compute-0 lucid_rosalind[84267]: pool 'volumes' created
Oct 10 09:45:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Oct 10 09:45:34 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:34 compute-0 systemd[1]: libpod-0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12.scope: Deactivated successfully.
Oct 10 09:45:34 compute-0 podman[84252]: 2025-10-10 09:45:34.374739835 +0000 UTC m=+0.618340297 container died 0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12 (image=quay.io/ceph/ceph:v19, name=lucid_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aaa29709f10765ed26decfb0af6ff266c1a2d06c015763b298457976432934e-merged.mount: Deactivated successfully.
Oct 10 09:45:34 compute-0 podman[84252]: 2025-10-10 09:45:34.40753208 +0000 UTC m=+0.651132522 container remove 0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12 (image=quay.io/ceph/ceph:v19, name=lucid_rosalind, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:34 compute-0 sudo[84249]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:34 compute-0 systemd[1]: libpod-conmon-0a63bca8a754faf85251dd6eb138911f0ecd783548778f8f4aba3e74f962ef12.scope: Deactivated successfully.
Oct 10 09:45:34 compute-0 sudo[84329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqqquamkzkvfgiutkdcmrimktreadto ; /usr/bin/python3'
Oct 10 09:45:34 compute-0 sudo[84329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:34 compute-0 python3[84331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:34 compute-0 podman[84332]: 2025-10-10 09:45:34.769996316 +0000 UTC m=+0.045529327 container create ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2 (image=quay.io/ceph/ceph:v19, name=intelligent_benz, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:45:34 compute-0 systemd[1]: Started libpod-conmon-ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2.scope.
Oct 10 09:45:34 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fb3918bbc0d7fdb3e5919cf192208822f7f590696b2e769340aaa994b150fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fb3918bbc0d7fdb3e5919cf192208822f7f590696b2e769340aaa994b150fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:34 compute-0 podman[84332]: 2025-10-10 09:45:34.834865884 +0000 UTC m=+0.110398975 container init ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2 (image=quay.io/ceph/ceph:v19, name=intelligent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:34 compute-0 podman[84332]: 2025-10-10 09:45:34.839909624 +0000 UTC m=+0.115442635 container start ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2 (image=quay.io/ceph/ceph:v19, name=intelligent_benz, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 09:45:34 compute-0 podman[84332]: 2025-10-10 09:45:34.843271028 +0000 UTC m=+0.118804069 container attach ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2 (image=quay.io/ceph/ceph:v19, name=intelligent_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:34 compute-0 podman[84332]: 2025-10-10 09:45:34.751779421 +0000 UTC m=+0.027312482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 6aa91174-fe39-4d9a-8243-abce677c4d6d (Updating crash deployment (+1 -> 3))
Oct 10 09:45:35 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 6aa91174-fe39-4d9a-8243-abce677c4d6d (Updating crash deployment (+1 -> 3)) in 2 seconds
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:35 compute-0 sudo[84370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:35 compute-0 sudo[84370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:35 compute-0 sudo[84370]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:35 compute-0 sudo[84395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:45:35 compute-0 sudo[84395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:35 compute-0 ceph-mon[73551]: osdmap e17: 2 total, 2 up, 2 in
Oct 10 09:45:35 compute-0 ceph-mon[73551]: pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Oct 10 09:45:35 compute-0 intelligent_benz[84347]: pool 'backups' created
Oct 10 09:45:35 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Oct 10 09:45:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:35 compute-0 systemd[1]: libpod-ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2.scope: Deactivated successfully.
Oct 10 09:45:35 compute-0 podman[84332]: 2025-10-10 09:45:35.407801278 +0000 UTC m=+0.683334289 container died ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2 (image=quay.io/ceph/ceph:v19, name=intelligent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 09:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fb3918bbc0d7fdb3e5919cf192208822f7f590696b2e769340aaa994b150fa-merged.mount: Deactivated successfully.
Oct 10 09:45:35 compute-0 podman[84332]: 2025-10-10 09:45:35.442790959 +0000 UTC m=+0.718323960 container remove ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2 (image=quay.io/ceph/ceph:v19, name=intelligent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:35 compute-0 systemd[1]: libpod-conmon-ea510e6291e4b9ae6ab1ec4c0728c3ae360de8eece3826ddce1c8f50f530f4c2.scope: Deactivated successfully.
Oct 10 09:45:35 compute-0 sudo[84329]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:35 compute-0 sudo[84498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftoybluqfwqyftaezdkmtnuhlthxfggu ; /usr/bin/python3'
Oct 10 09:45:35 compute-0 sudo[84498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.656292589 +0000 UTC m=+0.045844216 container create 20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:45:35 compute-0 systemd[1]: Started libpod-conmon-20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d.scope.
Oct 10 09:45:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.638268732 +0000 UTC m=+0.027820389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.736885639 +0000 UTC m=+0.126437306 container init 20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 09:45:35 compute-0 python3[84500]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.745748867 +0000 UTC m=+0.135300494 container start 20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_spence, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 09:45:35 compute-0 intelligent_spence[84518]: 167 167
Oct 10 09:45:35 compute-0 systemd[1]: libpod-20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d.scope: Deactivated successfully.
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.752549627 +0000 UTC m=+0.142101264 container attach 20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_spence, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.753616193 +0000 UTC m=+0.143167830 container died 20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-eab5f869b2c2d79917321749f1712fd498f80dba4edda1b2b729ac7c3b37a9e1-merged.mount: Deactivated successfully.
Oct 10 09:45:35 compute-0 podman[84501]: 2025-10-10 09:45:35.789353228 +0000 UTC m=+0.178904855 container remove 20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_spence, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:45:35 compute-0 systemd[1]: libpod-conmon-20cb20b73e1bc5f5273cea51171788b15960b31aa3e2fd7cec5698e039f4fe4d.scope: Deactivated successfully.
Oct 10 09:45:35 compute-0 podman[84522]: 2025-10-10 09:45:35.815744208 +0000 UTC m=+0.056098114 container create 627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc (image=quay.io/ceph/ceph:v19, name=naughty_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:35 compute-0 systemd[1]: Started libpod-conmon-627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc.scope.
Oct 10 09:45:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04be468aa154955c02375b4c341aa9e7989cfd1a4df94028c3f482ffec7ae0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04be468aa154955c02375b4c341aa9e7989cfd1a4df94028c3f482ffec7ae0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:35 compute-0 podman[84522]: 2025-10-10 09:45:35.794378457 +0000 UTC m=+0.034732383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:35 compute-0 podman[84522]: 2025-10-10 09:45:35.89821782 +0000 UTC m=+0.138571756 container init 627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc (image=quay.io/ceph/ceph:v19, name=naughty_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:45:35 compute-0 podman[84522]: 2025-10-10 09:45:35.90443348 +0000 UTC m=+0.144787386 container start 627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc (image=quay.io/ceph/ceph:v19, name=naughty_burnell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:45:35 compute-0 podman[84522]: 2025-10-10 09:45:35.907765811 +0000 UTC m=+0.148119747 container attach 627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc (image=quay.io/ceph/ceph:v19, name=naughty_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:35 compute-0 podman[84559]: 2025-10-10 09:45:35.953760053 +0000 UTC m=+0.046378965 container create efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:45:35 compute-0 systemd[1]: Started libpod-conmon-efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0.scope.
Oct 10 09:45:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e166dba6807121f323c5ab674a61873f765c29585bf24d3af66862e99331401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e166dba6807121f323c5ab674a61873f765c29585bf24d3af66862e99331401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e166dba6807121f323c5ab674a61873f765c29585bf24d3af66862e99331401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e166dba6807121f323c5ab674a61873f765c29585bf24d3af66862e99331401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e166dba6807121f323c5ab674a61873f765c29585bf24d3af66862e99331401/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 podman[84559]: 2025-10-10 09:45:35.935249729 +0000 UTC m=+0.027868661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:36 compute-0 podman[84559]: 2025-10-10 09:45:36.045933052 +0000 UTC m=+0.138551994 container init efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:45:36 compute-0 podman[84559]: 2025-10-10 09:45:36.05505387 +0000 UTC m=+0.147672782 container start efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_jemison, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:36 compute-0 podman[84559]: 2025-10-10 09:45:36.058729284 +0000 UTC m=+0.151348226 container attach efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:36 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 5 completed events
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:36 compute-0 peaceful_jemison[84576]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:45:36 compute-0 peaceful_jemison[84576]: --> All data devices are unavailable
Oct 10 09:45:36 compute-0 ceph-mon[73551]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:36 compute-0 ceph-mon[73551]: osdmap e18: 2 total, 2 up, 2 in
Oct 10 09:45:36 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:36 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Oct 10 09:45:36 compute-0 naughty_burnell[84551]: pool 'images' created
Oct 10 09:45:36 compute-0 systemd[1]: libpod-efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0.scope: Deactivated successfully.
Oct 10 09:45:36 compute-0 conmon[84576]: conmon efdd5922e1a6f35520b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0.scope/container/memory.events
Oct 10 09:45:36 compute-0 podman[84559]: 2025-10-10 09:45:36.386981845 +0000 UTC m=+0.479600767 container died efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Oct 10 09:45:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:36 compute-0 systemd[1]: libpod-627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc.scope: Deactivated successfully.
Oct 10 09:45:36 compute-0 podman[84522]: 2025-10-10 09:45:36.40727999 +0000 UTC m=+0.647633896 container died 627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc (image=quay.io/ceph/ceph:v19, name=naughty_burnell, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e166dba6807121f323c5ab674a61873f765c29585bf24d3af66862e99331401-merged.mount: Deactivated successfully.
Oct 10 09:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04be468aa154955c02375b4c341aa9e7989cfd1a4df94028c3f482ffec7ae0f-merged.mount: Deactivated successfully.
Oct 10 09:45:36 compute-0 podman[84522]: 2025-10-10 09:45:36.449932098 +0000 UTC m=+0.690286004 container remove 627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc (image=quay.io/ceph/ceph:v19, name=naughty_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 09:45:36 compute-0 systemd[1]: libpod-conmon-627f1320b8eb43b84c6970be65b833e5a3d300389451332878e5bdb12a30dbcc.scope: Deactivated successfully.
Oct 10 09:45:36 compute-0 podman[84559]: 2025-10-10 09:45:36.459268003 +0000 UTC m=+0.551886915 container remove efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 09:45:36 compute-0 sudo[84498]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:36 compute-0 systemd[1]: libpod-conmon-efdd5922e1a6f35520b78968e9c48399208fe37b5ded9b8a71e040bc094e5db0.scope: Deactivated successfully.
Oct 10 09:45:36 compute-0 sudo[84395]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:36 compute-0 sudo[84636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:36 compute-0 sudo[84636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:36 compute-0 sudo[84636]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:36 compute-0 sudo[84683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iserklbfhisviprjdejruuhumqbijkus ; /usr/bin/python3'
Oct 10 09:45:36 compute-0 sudo[84683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:36 compute-0 sudo[84685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:45:36 compute-0 sudo[84685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:36 compute-0 python3[84689]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:36 compute-0 podman[84712]: 2025-10-10 09:45:36.804163706 +0000 UTC m=+0.066910508 container create ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c (image=quay.io/ceph/ceph:v19, name=blissful_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"} v 0)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]': finished
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Oct 10 09:45:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:36 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:36 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:36 compute-0 systemd[1]: Started libpod-conmon-ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c.scope.
Oct 10 09:45:36 compute-0 podman[84712]: 2025-10-10 09:45:36.781593334 +0000 UTC m=+0.044340136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47df43ebadbb000a856b30f8bb56754f620d379daec9c8dae1e01d15c4a34ee8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47df43ebadbb000a856b30f8bb56754f620d379daec9c8dae1e01d15c4a34ee8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:36 compute-0 podman[84712]: 2025-10-10 09:45:36.915581394 +0000 UTC m=+0.178328216 container init ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c (image=quay.io/ceph/ceph:v19, name=blissful_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:36 compute-0 podman[84712]: 2025-10-10 09:45:36.924440953 +0000 UTC m=+0.187187725 container start ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c (image=quay.io/ceph/ceph:v19, name=blissful_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:36 compute-0 podman[84712]: 2025-10-10 09:45:36.927651351 +0000 UTC m=+0.190398123 container attach ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c (image=quay.io/ceph/ceph:v19, name=blissful_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:45:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v72: 5 pgs: 4 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.077105932 +0000 UTC m=+0.060348097 container create 8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:45:37 compute-0 systemd[1]: Started libpod-conmon-8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224.scope.
Oct 10 09:45:37 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.053396742 +0000 UTC m=+0.036638937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.156392266 +0000 UTC m=+0.139634461 container init 8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_perlman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.166066863 +0000 UTC m=+0.149309058 container start 8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.169752657 +0000 UTC m=+0.152994862 container attach 8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:37 compute-0 hungry_perlman[84806]: 167 167
Oct 10 09:45:37 compute-0 systemd[1]: libpod-8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224.scope: Deactivated successfully.
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.172841492 +0000 UTC m=+0.156083687 container died 8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_perlman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6535a501ced15699fbb2b39f4959d7a06c82faa6610bf511ffd7f4c158105a2f-merged.mount: Deactivated successfully.
Oct 10 09:45:37 compute-0 podman[84771]: 2025-10-10 09:45:37.218540542 +0000 UTC m=+0.201782707 container remove 8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_perlman, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 09:45:37 compute-0 systemd[1]: libpod-conmon-8932e65ba8ce520c20624ec49f128419b9e46dbb46d6b2d58376cf5f84118224.scope: Deactivated successfully.
Oct 10 09:45:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 10 09:45:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:37 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:37 compute-0 ceph-mon[73551]: osdmap e19: 2 total, 2 up, 2 in
Oct 10 09:45:37 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3277074974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 09:45:37 compute-0 ceph-mon[73551]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 09:45:37 compute-0 ceph-mon[73551]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]': finished
Oct 10 09:45:37 compute-0 ceph-mon[73551]: osdmap e20: 3 total, 2 up, 3 in
Oct 10 09:45:37 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:37 compute-0 ceph-mon[73551]: pgmap v72: 5 pgs: 4 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:37 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.434853828 +0000 UTC m=+0.053603189 container create 9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:37 compute-0 systemd[1]: Started libpod-conmon-9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63.scope.
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.405956164 +0000 UTC m=+0.024705605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:37 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e4ad209dfa5fe4dc4efb0e61753bfbc86ab86d14de15cf39c0523bd8c16bd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e4ad209dfa5fe4dc4efb0e61753bfbc86ab86d14de15cf39c0523bd8c16bd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e4ad209dfa5fe4dc4efb0e61753bfbc86ab86d14de15cf39c0523bd8c16bd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e4ad209dfa5fe4dc4efb0e61753bfbc86ab86d14de15cf39c0523bd8c16bd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.523171387 +0000 UTC m=+0.141920778 container init 9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wilson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.529352366 +0000 UTC m=+0.148101727 container start 9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.53273075 +0000 UTC m=+0.151480141 container attach 9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:37 compute-0 jolly_wilson[84849]: {
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:     "0": [
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:         {
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "devices": [
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "/dev/loop3"
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             ],
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "lv_name": "ceph_lv0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "lv_size": "21470642176",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "name": "ceph_lv0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "tags": {
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.cluster_name": "ceph",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.crush_device_class": "",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.encrypted": "0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.osd_id": "0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.type": "block",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.vdo": "0",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:                 "ceph.with_tpm": "0"
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             },
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "type": "block",
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:             "vg_name": "ceph_vg0"
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:         }
Oct 10 09:45:37 compute-0 jolly_wilson[84849]:     ]
Oct 10 09:45:37 compute-0 jolly_wilson[84849]: }
Oct 10 09:45:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 10 09:45:37 compute-0 systemd[1]: libpod-9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63.scope: Deactivated successfully.
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.853644303 +0000 UTC m=+0.472393664 container died 9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:45:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Oct 10 09:45:37 compute-0 blissful_lumiere[84739]: pool 'cephfs.cephfs.meta' created
Oct 10 09:45:37 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Oct 10 09:45:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:37 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:37 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0e4ad209dfa5fe4dc4efb0e61753bfbc86ab86d14de15cf39c0523bd8c16bd8-merged.mount: Deactivated successfully.
Oct 10 09:45:37 compute-0 systemd[1]: libpod-ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c.scope: Deactivated successfully.
Oct 10 09:45:37 compute-0 podman[84712]: 2025-10-10 09:45:37.901838969 +0000 UTC m=+1.164585751 container died ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c (image=quay.io/ceph/ceph:v19, name=blissful_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:45:37 compute-0 podman[84833]: 2025-10-10 09:45:37.926233832 +0000 UTC m=+0.544983193 container remove 9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wilson, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-47df43ebadbb000a856b30f8bb56754f620d379daec9c8dae1e01d15c4a34ee8-merged.mount: Deactivated successfully.
Oct 10 09:45:37 compute-0 podman[84712]: 2025-10-10 09:45:37.952574631 +0000 UTC m=+1.215321413 container remove ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c (image=quay.io/ceph/ceph:v19, name=blissful_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:45:37 compute-0 systemd[1]: libpod-conmon-9ab66dffbf4411d1a73f75033129e87ff68f8bcab33a39297e0aaf8cef08ea63.scope: Deactivated successfully.
Oct 10 09:45:37 compute-0 sudo[84685]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:37 compute-0 systemd[1]: libpod-conmon-ff645ac4e1904fd7906e4ea86194f37e5cd3e4e5ba2bd93e38179e0a6cb3324c.scope: Deactivated successfully.
Oct 10 09:45:37 compute-0 sudo[84683]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:38 compute-0 sudo[84883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:38 compute-0 sudo[84883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:38 compute-0 sudo[84883]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:38 compute-0 sudo[84909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:45:38 compute-0 sudo[84954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeygfefxzcvzxneputyarnoxiuplicdb ; /usr/bin/python3'
Oct 10 09:45:38 compute-0 sudo[84909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:38 compute-0 sudo[84954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:38 compute-0 python3[84958]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:45:38
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [balancer INFO root] Some PGs (0.666667) are unknown; try again later
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.290049034 +0000 UTC m=+0.059033103 container create c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8 (image=quay.io/ceph/ceph:v19, name=kind_elion, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:45:38 compute-0 systemd[1]: Started libpod-conmon-c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8.scope.
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.260416644 +0000 UTC m=+0.029400763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5463d31c658e475b1676bbe1146086bc3cf66f7a37cf6f7f1f98e12deaa3566/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5463d31c658e475b1676bbe1146086bc3cf66f7a37cf6f7f1f98e12deaa3566/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.375440233 +0000 UTC m=+0.144424322 container init c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8 (image=quay.io/ceph/ceph:v19, name=kind_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.381946213 +0000 UTC m=+0.150930282 container start c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8 (image=quay.io/ceph/ceph:v19, name=kind_elion, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.385623586 +0000 UTC m=+0.154607655 container attach c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8 (image=quay.io/ceph/ceph:v19, name=kind_elion, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1014583551' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:45:38 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:38 compute-0 ceph-mon[73551]: osdmap e21: 3 total, 2 up, 3 in
Oct 10 09:45:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:38 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.523518238 +0000 UTC m=+0.047533945 container create e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:45:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:38 compute-0 systemd[1]: Started libpod-conmon-e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001.scope.
Oct 10 09:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.502676725 +0000 UTC m=+0.026692452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.60895475 +0000 UTC m=+0.132970477 container init e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.615938605 +0000 UTC m=+0.139954302 container start e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:45:38 compute-0 cranky_keller[85049]: 167 167
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.619739923 +0000 UTC m=+0.143755640 container attach e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:45:38 compute-0 systemd[1]: libpod-e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001.scope: Deactivated successfully.
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.621313846 +0000 UTC m=+0.145329563 container died e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-90480d947bf01bcf8918f5c08636f24aa3622d8decce7ac975bc5e920aba024a-merged.mount: Deactivated successfully.
Oct 10 09:45:38 compute-0 podman[85014]: 2025-10-10 09:45:38.661499022 +0000 UTC m=+0.185514719 container remove e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:38 compute-0 systemd[1]: libpod-conmon-e66ed9ab6a6ba0facc0b0669b58f81282873fc9101e8f4e33270f13758b43001.scope: Deactivated successfully.
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:38 compute-0 podman[85073]: 2025-10-10 09:45:38.828617708 +0000 UTC m=+0.042825345 container create 86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Oct 10 09:45:38 compute-0 kind_elion[84996]: pool 'cephfs.cephfs.data' created
Oct 10 09:45:38 compute-0 systemd[1]: Started libpod-conmon-86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a.scope.
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev b26debb8-78b5-4262-96df-807d761d341b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.896170387 +0000 UTC m=+0.665154466 container died c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8 (image=quay.io/ceph/ceph:v19, name=kind_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 09:45:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:38 compute-0 systemd[1]: libpod-c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8.scope: Deactivated successfully.
Oct 10 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4d349b6fcbe3fdb1cc4c1cb86249b9983417ccddf9d567242417d55b1f84e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4d349b6fcbe3fdb1cc4c1cb86249b9983417ccddf9d567242417d55b1f84e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4d349b6fcbe3fdb1cc4c1cb86249b9983417ccddf9d567242417d55b1f84e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4d349b6fcbe3fdb1cc4c1cb86249b9983417ccddf9d567242417d55b1f84e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:38 compute-0 podman[85073]: 2025-10-10 09:45:38.811543062 +0000 UTC m=+0.025750709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:38 compute-0 podman[85073]: 2025-10-10 09:45:38.920419254 +0000 UTC m=+0.134626941 container init 86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp started
Oct 10 09:45:38 compute-0 podman[85073]: 2025-10-10 09:45:38.928183987 +0000 UTC m=+0.142391644 container start 86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_edison, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mgr.compute-2.gkrssp 192.168.122.102:0/2295308695; not ready for session (expect reconnect)
Oct 10 09:45:38 compute-0 podman[85073]: 2025-10-10 09:45:38.932388468 +0000 UTC m=+0.146596155 container attach 86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5463d31c658e475b1676bbe1146086bc3cf66f7a37cf6f7f1f98e12deaa3566-merged.mount: Deactivated successfully.
Oct 10 09:45:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 2 active+clean, 5 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:45:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:38 compute-0 podman[84959]: 2025-10-10 09:45:38.957798505 +0000 UTC m=+0.726782584 container remove c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8 (image=quay.io/ceph/ceph:v19, name=kind_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:45:38 compute-0 systemd[1]: libpod-conmon-c8b1245d9157d887537e4e9d19a9248fe394801a95385aea67643e51e6073fe8.scope: Deactivated successfully.
Oct 10 09:45:38 compute-0 sudo[84954]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:39 compute-0 sudo[85133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fypbsyqcthdpxpzwmrpnkncreptbpnci ; /usr/bin/python3'
Oct 10 09:45:39 compute-0 sudo[85133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc started
Oct 10 09:45:39 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from mgr.compute-1.rfugxc 192.168.122.101:0/2764951732; not ready for session (expect reconnect)
Oct 10 09:45:39 compute-0 python3[85140]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
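The Ansible task above wraps the ceph CLI in a throwaway podman container, so the host needs no ceph packages; only /etc/ceph (conf plus admin keyring) and the assimilate conf are bind-mounted in. Stripped of the container plumbing, what the --entrypoint ceph container executes is the plain admin command below; the fsid, conf, keyring, and pool name are taken straight from the logged command line:

    # What runs inside the container (arguments from the log)
    ceph --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 \
         -c /etc/ceph/ceph.conf \
         -k /etc/ceph/ceph.client.admin.keyring \
         osd pool application enable vms rbd
    # Equivalent short form on a host where /etc/ceph holds the defaults
    ceph osd pool application enable vms rbd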
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.354142873 +0000 UTC m=+0.057658395 container create 5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f (image=quay.io/ceph/ceph:v19, name=trusting_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:39 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:39 compute-0 ceph-mon[73551]: osdmap e22: 3 total, 2 up, 3 in
Oct 10 09:45:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:45:39 compute-0 ceph-mon[73551]: pgmap v75: 7 pgs: 2 active+clean, 5 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:39 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:45:39 compute-0 systemd[1]: Started libpod-conmon-5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f.scope.
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.333215668 +0000 UTC m=+0.036731210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8818e2d1ecf1c6342638d88571e4efe625d7e89b94426e1c5f0b3eb29eeab09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8818e2d1ecf1c6342638d88571e4efe625d7e89b94426e1c5f0b3eb29eeab09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.463442 +0000 UTC m=+0.166957522 container init 5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f (image=quay.io/ceph/ceph:v19, name=trusting_lovelace, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.471859904 +0000 UTC m=+0.175375416 container start 5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f (image=quay.io/ceph/ceph:v19, name=trusting_lovelace, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.475684953 +0000 UTC m=+0.179200485 container attach 5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f (image=quay.io/ceph/ceph:v19, name=trusting_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:39 compute-0 lvm[85242]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:45:39 compute-0 lvm[85242]: VG ceph_vg0 finished
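The two lvm autoactivation lines show the OSD's volume group ceph_vg0 coming online on a loop device, a common pattern in CI where OSDs are backed by file images instead of real disks. A minimal sketch of how such a backing device is typically prepared (/dev/loop3 and ceph_vg0 come from the log; the file path and size are assumptions):

    # Create a sparse backing file and expose it as a loop device
    truncate -s 20G /var/lib/ceph-osd0.img   # hypothetical path and size
    losetup /dev/loop3 /var/lib/ceph-osd0.img
    # Turn it into the PV/VG that lvm reports as "complete" above;
    # ceph-volume then carves OSD logical volumes out of this VG
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3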
Oct 10 09:45:39 compute-0 compassionate_edison[85092]: {}
Oct 10 09:45:39 compute-0 systemd[1]: libpod-86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a.scope: Deactivated successfully.
Oct 10 09:45:39 compute-0 systemd[1]: libpod-86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a.scope: Consumed 1.281s CPU time.
Oct 10 09:45:39 compute-0 podman[85073]: 2025-10-10 09:45:39.727678813 +0000 UTC m=+0.941886470 container died 86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_edison, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c4d349b6fcbe3fdb1cc4c1cb86249b9983417ccddf9d567242417d55b1f84e0-merged.mount: Deactivated successfully.
Oct 10 09:45:39 compute-0 podman[85073]: 2025-10-10 09:45:39.772585247 +0000 UTC m=+0.986792894 container remove 86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Oct 10 09:45:39 compute-0 systemd[1]: libpod-conmon-86398f366e0fd25275ef4ab6c601890701550cffb772e03d0e5e3fe533151a3a.scope: Deactivated successfully.
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 10 09:45:39 compute-0 sudo[84909]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Oct 10 09:45:39 compute-0 trusting_lovelace[85202]: enabled application 'rbd' on pool 'vms'
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"} v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"} v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:45:39 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:39 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 6a5d5322-8f1b-4a20-a309-b647486d6f7d (PG autoscaler increasing pool 3 PGs from 1 to 32)
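Two threads are interleaved here. The repeated "failed to return metadata for osd.2" errors are transient: osd.2 is marked in but not yet up (the osdmap lines read "3 total, 2 up, 3 in"), and cephadm only deploys that daemon further down in the log. Meanwhile the pg_autoscaler is raising each pool from 1 PG to 32, which is what drives the audited "osd pool set ... pg_num" and "pg_num_actual" commands. The same adjustment can be observed or made by hand:

    # Watch the autoscaler's per-pool targets
    ceph osd pool autoscale-status
    # Equivalent manual change for one pool (mirrors the audited command)
    ceph osd pool set volumes pg_num 32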
Oct 10 09:45:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:45:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:39 compute-0 systemd[1]: libpod-5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f.scope: Deactivated successfully.
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.92118836 +0000 UTC m=+0.624703922 container died 5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f (image=quay.io/ceph/ceph:v19, name=trusting_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8818e2d1ecf1c6342638d88571e4efe625d7e89b94426e1c5f0b3eb29eeab09-merged.mount: Deactivated successfully.
Oct 10 09:45:39 compute-0 podman[85160]: 2025-10-10 09:45:39.964292533 +0000 UTC m=+0.667808055 container remove 5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f (image=quay.io/ceph/ceph:v19, name=trusting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:39 compute-0 systemd[1]: libpod-conmon-5c19d4b5e06d54ce269fa0a21b6b63a1f011547167162e7fb5ee5cdfd0abfc1f.scope: Deactivated successfully.
Oct 10 09:45:39 compute-0 sudo[85133]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:40 compute-0 sudo[85296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqklolrnzrdzffetgnlmoadfedndbwet ; /usr/bin/python3'
Oct 10 09:45:40 compute-0 sudo[85296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:40 compute-0 python3[85298]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.415707779 +0000 UTC m=+0.063442951 container create 162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff (image=quay.io/ceph/ceph:v19, name=clever_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 10 09:45:40 compute-0 ceph-mon[73551]: osdmap e23: 3 total, 2 up, 3 in
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mgrmap e10: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:40 compute-0 systemd[1]: Started libpod-conmon-162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff.scope.
Oct 10 09:45:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7627fc1b6bb304609e1f3b5c46ab9f40cb441cdc3e34f490429a36d3facaef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7627fc1b6bb304609e1f3b5c46ab9f40cb441cdc3e34f490429a36d3facaef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.395487337 +0000 UTC m=+0.043222529 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.499549477 +0000 UTC m=+0.147284669 container init 162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff (image=quay.io/ceph/ceph:v19, name=clever_lamport, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.50911 +0000 UTC m=+0.156845172 container start 162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff (image=quay.io/ceph/ceph:v19, name=clever_lamport, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.512542776 +0000 UTC m=+0.160277938 container attach 162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff (image=quay.io/ceph/ceph:v19, name=clever_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct 10 09:45:40 compute-0 clever_lamport[85315]: enabled application 'rbd' on pool 'volumes'
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:40 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 346d3aa9-61fe-402c-9407-f2ce16a7fb2b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:40 compute-0 systemd[1]: libpod-162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff.scope: Deactivated successfully.
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.931930231 +0000 UTC m=+0.579665403 container died 162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff (image=quay.io/ceph/ceph:v19, name=clever_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v78: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:45:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7627fc1b6bb304609e1f3b5c46ab9f40cb441cdc3e34f490429a36d3facaef-merged.mount: Deactivated successfully.
Oct 10 09:45:40 compute-0 podman[85299]: 2025-10-10 09:45:40.986635555 +0000 UTC m=+0.634370757 container remove 162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff (image=quay.io/ceph/ceph:v19, name=clever_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:45:40 compute-0 systemd[1]: libpod-conmon-162aa65da7d26c612b78d55cab8612ddc67060981edab7a936d78d285f67b0ff.scope: Deactivated successfully.
Oct 10 09:45:41 compute-0 sudo[85296]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:41 compute-0 sudo[85373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddnrupczfvrodesnoqwqfqvcyxsdrmmb ; /usr/bin/python3'
Oct 10 09:45:41 compute-0 sudo[85373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:41 compute-0 ceph-mgr[73845]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
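The mgr progress module opens this "Global Recovery Event" because the freshly split PGs are still reported unknown until their primary OSD peers them. Progress events like this can be listed from the CLI; a brief check might look like:

    # Show active progress events (the recovery event above should appear)
    ceph progress
    # Overall cluster state, including PGs not yet active+clean
    ceph -s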
Oct 10 09:45:41 compute-0 python3[85375]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:41 compute-0 podman[85376]: 2025-10-10 09:45:41.422200937 +0000 UTC m=+0.053421433 container create 902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a (image=quay.io/ceph/ceph:v19, name=recursing_benz, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 10 09:45:41 compute-0 ceph-mon[73551]: osdmap e24: 3 total, 2 up, 3 in
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mon[73551]: 2.1e scrub starts
Oct 10 09:45:41 compute-0 ceph-mon[73551]: pgmap v78: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mon[73551]: 2.1e scrub ok
Oct 10 09:45:41 compute-0 systemd[1]: Started libpod-conmon-902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a.scope.
Oct 10 09:45:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b07865342325bb183a819d41af8d912ff1dcbd5a11901b01ba8f59b6ed93e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b07865342325bb183a819d41af8d912ff1dcbd5a11901b01ba8f59b6ed93e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:41 compute-0 podman[85376]: 2025-10-10 09:45:41.40273277 +0000 UTC m=+0.033953256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:41 compute-0 podman[85376]: 2025-10-10 09:45:41.500120645 +0000 UTC m=+0.131341121 container init 902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a (image=quay.io/ceph/ceph:v19, name=recursing_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 09:45:41 compute-0 podman[85376]: 2025-10-10 09:45:41.505336911 +0000 UTC m=+0.136557377 container start 902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a (image=quay.io/ceph/ceph:v19, name=recursing_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:45:41 compute-0 podman[85376]: 2025-10-10 09:45:41.508836199 +0000 UTC m=+0.140056675 container attach 902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a (image=quay.io/ceph/ceph:v19, name=recursing_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 10 09:45:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct 10 09:45:41 compute-0 recursing_benz[85391]: enabled application 'rbd' on pool 'backups'
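Each "enabled application 'rbd' on pool ..." line clears one pool from the POOL_APP_NOT_ENABLED health warning raised above; the count drops as the Ansible loop tags vms, volumes, backups, and images in turn. The per-pool command, exactly as the containers execute it, is:

    # Tag a pool so the health check stops flagging it
    ceph osd pool application enable backups rbd
    # Verify which applications are set on the pool
    ceph osd pool application get backups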
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct 10 09:45:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:41 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:41 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 9895f6cb-c44f-4b73-8d0d-25758150ddd3 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:45:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=9.459169388s) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active pruub 57.972309113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25 pruub=10.476861000s) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active pruub 58.990211487s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=9.459169388s) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown pruub 57.972309113s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25 pruub=10.476861000s) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown pruub 58.990211487s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:41 compute-0 systemd[1]: libpod-902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a.scope: Deactivated successfully.
Oct 10 09:45:41 compute-0 podman[85376]: 2025-10-10 09:45:41.955889697 +0000 UTC m=+0.587110173 container died 902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a (image=quay.io/ceph/ceph:v19, name=recursing_benz, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0b07865342325bb183a819d41af8d912ff1dcbd5a11901b01ba8f59b6ed93e6-merged.mount: Deactivated successfully.
Oct 10 09:45:42 compute-0 podman[85376]: 2025-10-10 09:45:42.011795063 +0000 UTC m=+0.643015509 container remove 902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a (image=quay.io/ceph/ceph:v19, name=recursing_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:45:42 compute-0 systemd[1]: libpod-conmon-902c46ee8e9785da3b0db152acfa8843bdb6e5c3d3701f93db57c0791f1d2e5a.scope: Deactivated successfully.
Oct 10 09:45:42 compute-0 sudo[85373]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:42 compute-0 sudo[85450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkywdfszqrrlclqmcligzjvltrhbvkhn ; /usr/bin/python3'
Oct 10 09:45:42 compute-0 sudo[85450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:42 compute-0 python3[85452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mon[73551]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 10 09:45:42 compute-0 ceph-mon[73551]: osdmap e25: 3 total, 2 up, 3 in
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mon[73551]: 2.1d deep-scrub starts
Oct 10 09:45:42 compute-0 ceph-mon[73551]: 2.1d deep-scrub ok
Oct 10 09:45:42 compute-0 podman[85453]: 2025-10-10 09:45:42.456709299 +0000 UTC m=+0.064398643 container create ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d (image=quay.io/ceph/ceph:v19, name=ecstatic_khayyam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 09:45:42 compute-0 systemd[1]: Started libpod-conmon-ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d.scope.
Oct 10 09:45:42 compute-0 podman[85453]: 2025-10-10 09:45:42.421607646 +0000 UTC m=+0.029297070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0267be2578b1928c469667fdbd256fb965f5064f6171787464d8154353261c33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0267be2578b1928c469667fdbd256fb965f5064f6171787464d8154353261c33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:42 compute-0 podman[85453]: 2025-10-10 09:45:42.552094147 +0000 UTC m=+0.159783581 container init ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d (image=quay.io/ceph/ceph:v19, name=ecstatic_khayyam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:42 compute-0 podman[85453]: 2025-10-10 09:45:42.559169666 +0000 UTC m=+0.166859040 container start ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d (image=quay.io/ceph/ceph:v19, name=ecstatic_khayyam, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:42 compute-0 podman[85453]: 2025-10-10 09:45:42.563012345 +0000 UTC m=+0.170701719 container attach ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d (image=quay.io/ceph/ceph:v19, name=ecstatic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
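Here cephadm fetches the osd.2 key and a minimal conf, then deploys the daemon on compute-2; once it starts and registers, the recurring "failed to return metadata for osd.2" errors should stop. Deployment progress can be followed through the orchestrator CLI, for example:

    # List orchestrator-managed daemons, filtered to OSDs
    ceph orch ps --daemon-type osd
    # Check for the new daemon on the target host
    ceph orch ps compute-2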
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
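The ENOENT above is expected at this point: osd.2 has only just been deployed and has not yet booted and registered its metadata with the mon, so the mgr's "osd metadata" query comes back empty. Once the daemon is up, the same query succeeds; to check by hand:

  # Returns hostname, device class, version, etc. once osd.2 has registered
  ceph osd metadata 2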
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 860d6a54-b361-4e22-b68a-0d6223daab66 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.19( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1e( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1f( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.18( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.17( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.10( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.16( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.11( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.15( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.12( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.14( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.13( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.13( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.14( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.12( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.15( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.11( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.16( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.10( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.17( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.8( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.9( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev b26debb8-78b5-4262-96df-807d761d341b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event b26debb8-78b5-4262-96df-807d761d341b (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 6a5d5322-8f1b-4a20-a309-b647486d6f7d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 6a5d5322-8f1b-4a20-a309-b647486d6f7d (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 346d3aa9-61fe-402c-9407-f2ce16a7fb2b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 346d3aa9-61fe-402c-9407-f2ce16a7fb2b (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 9895f6cb-c44f-4b73-8d0d-25758150ddd3 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 9895f6cb-c44f-4b73-8d0d-25758150ddd3 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 seconds
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 860d6a54-b361-4e22-b68a-0d6223daab66 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 860d6a54-b361-4e22-b68a-0d6223daab66 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 0 seconds
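The pg_autoscaler has now walked pools 2 through 6 from 1 PG up to 32 each. The same state can be inspected, or a target set manually, with the standard commands (pool name taken from the "osd pool set" audit entries above):

  # Compare each pool's current PG count against the autoscaler's target
  ceph osd pool autoscale-status
  # Set the target directly; the mgr then applies it incrementally
  ceph osd pool set cephfs.cephfs.meta pg_num 32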
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.a( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.b( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.d( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.7( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.7( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.6( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.2( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.5( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.6( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.2( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.5( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.3( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.4( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.c( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.3( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.8( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.f( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.e( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1d( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1c( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1b( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1a( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.19( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.18( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.9( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.4( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.10( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.11( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.16( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.0( empty local-lis/les=25/26 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=25/26 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v81: 100 pgs: 38 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
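In the pgmap line above, the 62 "unknown" PGs are the freshly split placement groups no OSD has reported on yet; they flip to active+clean as the surrounding peering messages complete. The same counters can be watched converging from the CLI, assuming an admin keyring on the host:

  # One-line PG state summary (same figures as the pgmap log line)
  ceph pg stat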
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:45:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
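pg_num_actual is the mgr-driven companion to pg_num: the earlier "osd pool set ... pg_num 32" records the target, and the mgr then steps the pool's real PG count toward it with pg_num_actual updates like the two dispatched above. Issued manually it would look like this, though in practice only the mgr normally does so:

  # Step the on-disk PG count toward the previously set pg_num target
  ceph osd pool set images pg_num_actual 32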
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [0] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.4( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:42 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 26 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [0] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:43 compute-0 ceph-mon[73551]: Deploying daemon osd.2 on compute-2
Oct 10 09:45:43 compute-0 ceph-mon[73551]: 2.1f scrub starts
Oct 10 09:45:43 compute-0 ceph-mon[73551]: 2.1f scrub ok
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:43 compute-0 ceph-mon[73551]: osdmap e26: 3 total, 2 up, 3 in
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:43 compute-0 ceph-mon[73551]: pgmap v81: 100 pgs: 38 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:43 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:43 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 10 09:45:43 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.1e scrub ok
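The "scrub starts"/"scrub ok" pairs here (2.1f, 4.1e) are routine PG scrubs on the mostly empty pools, each verifying object consistency within the PG's acting set. A scrub can also be requested explicitly per PG:

  # Ask the primary OSD to scrub placement group 4.1e now
  ceph pg scrub 4.1e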
Oct 10 09:45:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 10 09:45:43 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 10 09:45:43 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:43 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct 10 09:45:43 compute-0 ecstatic_khayyam[85468]: enabled application 'rbd' on pool 'images'
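That line is the stdout of the ecstatic_khayyam container started earlier, confirming the pool application call completed. Stripped of the podman wrapper, the underlying command is the one already visible in the audit log:

  # Tag the 'images' pool for RBD use so clients and health checks recognize it
  ceph osd pool application enable images rbd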
Oct 10 09:45:43 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct 10 09:45:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:43 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:43 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:43 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=10.930658340s) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.456718445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:43 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 27 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=8.900621414s) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 59.426700592s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:43 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=10.930658340s) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown pruub 61.456718445s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:43 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 27 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=8.900621414s) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown pruub 59.426700592s@ mbc={}] state<Start>: transitioning to Primary
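Map epoch 27 (the pg_num_actual changes landing) forces PGs 5.0 and 6.0 through a fresh peering interval: the acting set is unchanged ([0] -> [0]), so each PG simply re-runs Start -> Primary and re-activates, and the epoch-28 lines further below do the same for the newly split children. The full peering state machine for any single PG can be dumped with:

  # Detailed peering/recovery state dump for one placement group
  ceph pg 5.0 query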
Oct 10 09:45:43 compute-0 systemd[1]: libpod-ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d.scope: Deactivated successfully.
Oct 10 09:45:43 compute-0 podman[85453]: 2025-10-10 09:45:43.965110616 +0000 UTC m=+1.572800040 container died ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d (image=quay.io/ceph/ceph:v19, name=ecstatic_khayyam, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0267be2578b1928c469667fdbd256fb965f5064f6171787464d8154353261c33-merged.mount: Deactivated successfully.
Oct 10 09:45:44 compute-0 systemd[74886]: Starting Mark boot as successful...
Oct 10 09:45:44 compute-0 systemd[74886]: Finished Mark boot as successful.
Oct 10 09:45:44 compute-0 podman[85453]: 2025-10-10 09:45:44.006903965 +0000 UTC m=+1.614593309 container remove ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d (image=quay.io/ceph/ceph:v19, name=ecstatic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 09:45:44 compute-0 systemd[1]: libpod-conmon-ea1831b086e55c9d130d874a23c11536d9e85f3c534078bb5867eca337f9ec0d.scope: Deactivated successfully.
Oct 10 09:45:44 compute-0 sudo[85450]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:44 compute-0 sudo[85530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfobvgxigefjuhdldojmlhscdmrscax ; /usr/bin/python3'
Oct 10 09:45:44 compute-0 sudo[85530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:44 compute-0 python3[85532]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
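The _raw_params blob above, reflowed for readability (every path, the fsid, and the image reference are exactly as logged):

  podman run --rm --net=host --ipc=host \
    --volume /etc/ceph:/etc/ceph:z \
    --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
    --entrypoint ceph quay.io/ceph/ceph:v19 \
    --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    osd pool application enable cephfs.cephfs.meta cephfs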
Oct 10 09:45:44 compute-0 podman[85533]: 2025-10-10 09:45:44.429747874 +0000 UTC m=+0.062259727 container create 20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c (image=quay.io/ceph/ceph:v19, name=objective_kowalevski, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:45:44 compute-0 systemd[1]: Started libpod-conmon-20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c.scope.
Oct 10 09:45:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:44 compute-0 podman[85533]: 2025-10-10 09:45:44.409924075 +0000 UTC m=+0.042435938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed49ce46575dcadfc966ab1a8bde7d291bc67db275d29678f992ae439f515a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed49ce46575dcadfc966ab1a8bde7d291bc67db275d29678f992ae439f515a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:44 compute-0 podman[85533]: 2025-10-10 09:45:44.5216869 +0000 UTC m=+0.154198843 container init 20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c (image=quay.io/ceph/ceph:v19, name=objective_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 09:45:44 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 10 09:45:44 compute-0 podman[85533]: 2025-10-10 09:45:44.528695891 +0000 UTC m=+0.161207744 container start 20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c (image=quay.io/ceph/ceph:v19, name=objective_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:44 compute-0 podman[85533]: 2025-10-10 09:45:44.532262583 +0000 UTC m=+0.164774466 container attach 20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c (image=quay.io/ceph/ceph:v19, name=objective_kowalevski, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:44 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 10 09:45:44 compute-0 ceph-mon[73551]: 4.1e scrub starts
Oct 10 09:45:44 compute-0 ceph-mon[73551]: 4.1e scrub ok
Oct 10 09:45:44 compute-0 ceph-mon[73551]: 2.9 scrub starts
Oct 10 09:45:44 compute-0 ceph-mon[73551]: 2.9 scrub ok
Oct 10 09:45:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 10 09:45:44 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:44 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:44 compute-0 ceph-mon[73551]: osdmap e27: 3 total, 2 up, 3 in
Oct 10 09:45:44 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v83: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 10 09:45:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Oct 10 09:45:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 10 09:45:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct 10 09:45:44 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct 10 09:45:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:44 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:44 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.19( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1a( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.18( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1b( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.18( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.19( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1e( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1f( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.c( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.d( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.2( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.5( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.6( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.4( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.7( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.7( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.4( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.3( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.3( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.2( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.6( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.c( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.f( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.d( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.e( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.a( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.9( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.b( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.8( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.5( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.8( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.b( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.9( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.a( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.17( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.14( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.15( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.16( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.14( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.17( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.15( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.16( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.12( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.11( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.13( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.10( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.10( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.12( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.11( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1e( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.13( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1f( empty local-lis/les=19/20 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1d( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1c( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.19( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.18( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.19( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.18( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.5( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.6( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.2( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.7( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.7( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.4( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=27/28 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.3( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.0( empty local-lis/les=27/28 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.3( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.2( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.6( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.5( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.9( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.8( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.9( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.14( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.17( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.15( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.16( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.14( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.15( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.16( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.11( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.10( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.11( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.10( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[5.1e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.1c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:44 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 28 pg[6.13( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:45 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 10 09:45:45 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 10 09:45:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 10 09:45:46 compute-0 ceph-mon[73551]: 4.1f scrub starts
Oct 10 09:45:46 compute-0 ceph-mon[73551]: 4.1f scrub ok
Oct 10 09:45:46 compute-0 ceph-mon[73551]: 2.1c scrub starts
Oct 10 09:45:46 compute-0 ceph-mon[73551]: 2.1c scrub ok
Oct 10 09:45:46 compute-0 ceph-mon[73551]: pgmap v83: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 10 09:45:46 compute-0 ceph-mon[73551]: osdmap e28: 3 total, 2 up, 3 in
Oct 10 09:45:46 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 10 09:45:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct 10 09:45:46 compute-0 objective_kowalevski[85548]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 10 09:45:46 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct 10 09:45:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:46 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:46 compute-0 systemd[1]: libpod-20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c.scope: Deactivated successfully.
Oct 10 09:45:46 compute-0 podman[85533]: 2025-10-10 09:45:46.221816761 +0000 UTC m=+1.854328624 container died 20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c (image=quay.io/ceph/ceph:v19, name=objective_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:46 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 10 completed events
Oct 10 09:45:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bed49ce46575dcadfc966ab1a8bde7d291bc67db275d29678f992ae439f515a1-merged.mount: Deactivated successfully.
Oct 10 09:45:46 compute-0 podman[85533]: 2025-10-10 09:45:46.301843009 +0000 UTC m=+1.934354862 container remove 20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c (image=quay.io/ceph/ceph:v19, name=objective_kowalevski, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:46 compute-0 systemd[1]: libpod-conmon-20a80c814e3a92d7e0689fc955aada7fbef3391e9e2ab4646fc517cb35c05f6c.scope: Deactivated successfully.
Oct 10 09:45:46 compute-0 sudo[85530]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:46 compute-0 sudo[85608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvdtnhvewdvzhqzuoggihwgkzkqtwgex ; /usr/bin/python3'
Oct 10 09:45:46 compute-0 sudo[85608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:46 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 10 09:45:46 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 10 09:45:46 compute-0 python3[85610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:46 compute-0 podman[85611]: 2025-10-10 09:45:46.748782739 +0000 UTC m=+0.039954833 container create fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36 (image=quay.io/ceph/ceph:v19, name=xenodochial_knuth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:46 compute-0 systemd[1]: Started libpod-conmon-fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36.scope.
Oct 10 09:45:46 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf4479624122785307181b395142fb621ca7f752c4ef59ce2f9831225fc7ecc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf4479624122785307181b395142fb621ca7f752c4ef59ce2f9831225fc7ecc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:46 compute-0 podman[85611]: 2025-10-10 09:45:46.729655583 +0000 UTC m=+0.020827707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:46 compute-0 podman[85611]: 2025-10-10 09:45:46.83273049 +0000 UTC m=+0.123902614 container init fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36 (image=quay.io/ceph/ceph:v19, name=xenodochial_knuth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:46 compute-0 podman[85611]: 2025-10-10 09:45:46.840931182 +0000 UTC m=+0.132103286 container start fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36 (image=quay.io/ceph/ceph:v19, name=xenodochial_knuth, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:45:46 compute-0 podman[85611]: 2025-10-10 09:45:46.843996767 +0000 UTC m=+0.135168871 container attach fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36 (image=quay.io/ceph/ceph:v19, name=xenodochial_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v86: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:47 compute-0 ceph-mon[73551]: 3.18 scrub starts
Oct 10 09:45:47 compute-0 ceph-mon[73551]: 3.18 scrub ok
Oct 10 09:45:47 compute-0 ceph-mon[73551]: 2.8 scrub starts
Oct 10 09:45:47 compute-0 ceph-mon[73551]: 2.8 scrub ok
Oct 10 09:45:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 10 09:45:47 compute-0 ceph-mon[73551]: osdmap e29: 3 total, 2 up, 3 in
Oct 10 09:45:47 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:47 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:47 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:47 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Oct 10 09:45:47 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 10 09:45:47 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 10 09:45:47 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 10 09:45:47 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 10 09:45:48 compute-0 ceph-mon[73551]: 3.17 scrub starts
Oct 10 09:45:48 compute-0 ceph-mon[73551]: 3.17 scrub ok
Oct 10 09:45:48 compute-0 ceph-mon[73551]: 2.7 scrub starts
Oct 10 09:45:48 compute-0 ceph-mon[73551]: 2.7 scrub ok
Oct 10 09:45:48 compute-0 ceph-mon[73551]: pgmap v86: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 10 09:45:48 compute-0 ceph-mon[73551]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 10 09:45:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct 10 09:45:48 compute-0 xenodochial_knuth[85626]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 10 09:45:48 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct 10 09:45:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:48 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:48 compute-0 systemd[1]: libpod-fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36.scope: Deactivated successfully.
Oct 10 09:45:48 compute-0 podman[85611]: 2025-10-10 09:45:48.079909686 +0000 UTC m=+1.371081810 container died fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36 (image=quay.io/ceph/ceph:v19, name=xenodochial_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cf4479624122785307181b395142fb621ca7f752c4ef59ce2f9831225fc7ecc-merged.mount: Deactivated successfully.
Oct 10 09:45:48 compute-0 podman[85611]: 2025-10-10 09:45:48.120178068 +0000 UTC m=+1.411350192 container remove fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36 (image=quay.io/ceph/ceph:v19, name=xenodochial_knuth, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:48 compute-0 systemd[1]: libpod-conmon-fad7c073e8d27bbe90939d7d0e5bc2cf39201a80228e9c35e68075f4d1692c36.scope: Deactivated successfully.
Oct 10 09:45:48 compute-0 sudo[85608]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:48 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 10 09:45:48 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 10 09:45:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:48 compute-0 sudo[85661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:45:48 compute-0 sudo[85661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:48 compute-0 sudo[85661]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v88: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:49 compute-0 ceph-mon[73551]: 4.10 scrub starts
Oct 10 09:45:49 compute-0 ceph-mon[73551]: 4.10 scrub ok
Oct 10 09:45:49 compute-0 ceph-mon[73551]: 2.a scrub starts
Oct 10 09:45:49 compute-0 ceph-mon[73551]: 2.a scrub ok
Oct 10 09:45:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 10 09:45:49 compute-0 ceph-mon[73551]: osdmap e30: 3 total, 2 up, 3 in
Oct 10 09:45:49 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:49 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:49 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:49 compute-0 python3[85762]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:45:49 compute-0 python3[85833]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089548.819387-33850-15738394517052/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:45:49 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Oct 10 09:45:49 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Oct 10 09:45:49 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:45:49 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 10 09:45:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct 10 09:45:49 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 09:45:50 compute-0 sudo[85933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxmktkwqtzvhoqtgekzymorrbvqkdyco ; /usr/bin/python3'
Oct 10 09:45:50 compute-0 sudo[85933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: 3.16 deep-scrub starts
Oct 10 09:45:50 compute-0 ceph-mon[73551]: 3.16 deep-scrub ok
Oct 10 09:45:50 compute-0 ceph-mon[73551]: 2.4 scrub starts
Oct 10 09:45:50 compute-0 ceph-mon[73551]: 2.4 scrub ok
Oct 10 09:45:50 compute-0 ceph-mon[73551]: pgmap v88: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:50 compute-0 ceph-mon[73551]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: Cluster is now healthy
Oct 10 09:45:50 compute-0 ceph-mon[73551]: from='osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: 2.1 scrub starts
Oct 10 09:45:50 compute-0 ceph-mon[73551]: 2.1 scrub ok
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:50 compute-0 python3[85935]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:45:50 compute-0 sudo[85933]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:50 compute-0 sudo[86008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfohpoogihnadsulanzgtwehnxghaly ; /usr/bin/python3'
Oct 10 09:45:50 compute-0 sudo[86008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:50 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Oct 10 09:45:50 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Oct 10 09:45:50 compute-0 python3[86010]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089549.8302555-33864-132772820500707/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=ea4992418032e3c76346ae6c06e2e33d6c2b2a3c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:45:50 compute-0 sudo[86008]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e31 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Oct 10 09:45:50 compute-0 sudo[86058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofezbrqjbdxxkvdmmrzdgubsxnlkwft ; /usr/bin/python3'
Oct 10 09:45:50 compute-0 sudo[86058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v90: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:45:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:50 compute-0 python3[86060]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.030259779 +0000 UTC m=+0.040066557 container create e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:45:51 compute-0 systemd[1]: Started libpod-conmon-e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf.scope.
Oct 10 09:45:51 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.012882952 +0000 UTC m=+0.022689750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d134b36c04a63e0bc6e03fa5d7cf2ce616bb7775bc42082e59475082cc8212a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d134b36c04a63e0bc6e03fa5d7cf2ce616bb7775bc42082e59475082cc8212a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d134b36c04a63e0bc6e03fa5d7cf2ce616bb7775bc42082e59475082cc8212a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: 4.11 deep-scrub starts
Oct 10 09:45:51 compute-0 ceph-mon[73551]: 4.11 deep-scrub ok
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: osdmap e31: 3 total, 2 up, 3 in
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: 2.0 scrub starts
Oct 10 09:45:51 compute-0 ceph-mon[73551]: 2.0 scrub ok
Oct 10 09:45:51 compute-0 ceph-mon[73551]: pgmap v90: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.142574734 +0000 UTC m=+0.152381562 container init e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.150407443 +0000 UTC m=+0.160214231 container start e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.154478732 +0000 UTC m=+0.164285540 container attach e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event c60f8eba-1870-44d9-aace-59c960fa48ce (Global Recovery Event) in 10 seconds
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 09:45:51 compute-0 youthful_jepsen[86077]: 
Oct 10 09:45:51 compute-0 youthful_jepsen[86077]: [global]
Oct 10 09:45:51 compute-0 youthful_jepsen[86077]:         fsid = 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:51 compute-0 youthful_jepsen[86077]:         mon_host = 192.168.122.100
Oct 10 09:45:51 compute-0 systemd[1]: libpod-e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf.scope: Deactivated successfully.
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.522235854 +0000 UTC m=+0.532042632 container died e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 09:45:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d134b36c04a63e0bc6e03fa5d7cf2ce616bb7775bc42082e59475082cc8212a-merged.mount: Deactivated successfully.
Oct 10 09:45:51 compute-0 podman[86061]: 2025-10-10 09:45:51.559199943 +0000 UTC m=+0.569006721 container remove e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:51 compute-0 systemd[1]: libpod-conmon-e1a4ce048aeb8fa0f6b60d26ddd9cba9e2c61a474e289d851c42e3bf972f3fbf.scope: Deactivated successfully.
Oct 10 09:45:51 compute-0 sudo[86058]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:51 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 10 09:45:51 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/269809354; not ready for session (expect reconnect)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:51 compute-0 sudo[86137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpnmcgrpvswysrwauyhzozijzgmqamw ; /usr/bin/python3'
Oct 10 09:45:51 compute-0 sudo[86137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.19( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.e( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.1( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.6( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.4( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.9( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.1f( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[2.1e( empty local-lis/les=0/0 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.136394501s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.529747009s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.136360168s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.529747009s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169381142s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.562927246s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.135130882s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.528808594s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169316292s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563003540s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169227600s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.562919617s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.135130882s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528808594s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169316292s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563003540s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169198036s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.562919617s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.134947777s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.528816223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.134947777s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528816223s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169157028s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563072205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.169138908s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563072205s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.136138916s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.530113220s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.136104584s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.530113220s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.134448051s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.528511047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.134430885s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528511047s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168951988s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563049316s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168951988s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563049316s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133416176s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.527626038s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133506775s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.527717590s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168873787s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563079834s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133506775s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527717590s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133416176s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527626038s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168848991s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563079834s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168828011s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563163757s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133074760s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.527427673s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168828011s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563163757s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133074760s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527427673s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133088112s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.527526855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133088112s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527526855s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168684959s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563224792s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168499947s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.562927246s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.135245323s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.529815674s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168658257s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563224792s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.135245323s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.529815674s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.132550240s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.527145386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.132526398s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527145386s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168623924s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563316345s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168609619s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563316345s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168560028s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563354492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.132160187s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526962280s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.132160187s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526962280s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168560028s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563354492s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133692741s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.528579712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168432236s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563323975s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168410301s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563323975s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.133677483s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528579712s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.131930351s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526931763s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.131930351s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526931763s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168340683s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563415527s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168316841s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563415527s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.131214142s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526451111s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.131196022s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526451111s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168192863s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563491821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168039322s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563354492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168168068s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563476562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168039322s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563354492s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168168068s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563476562s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.168081284s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563491821s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130829811s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526283264s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130829811s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526283264s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167962074s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563468933s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167945862s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563468933s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.131333351s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526870728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130067825s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.525787354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.131213188s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526870728s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130067825s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525787354s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130164146s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.525955200s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130149841s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525955200s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=27/28 n=0 ec=19/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167775154s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563636780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167695045s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563583374s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=27/28 n=0 ec=19/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167775154s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563636780s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167680740s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563583374s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167798996s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563789368s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167782784s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563789368s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=25/26 n=0 ec=17/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128955841s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.525001526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=25/26 n=0 ec=17/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128955841s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525001526s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167663574s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563781738s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128925323s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.525077820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167638779s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563781738s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167827606s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563995361s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128909111s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525077820s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128597260s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.524780273s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128580093s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.524780273s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167798042s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563995361s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130509377s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526199341s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130509377s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526199341s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130696297s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.527030945s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.130680084s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527030945s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128415108s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.524864197s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167393684s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563858032s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128393173s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.524864197s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167393684s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563858032s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.127608299s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.524215698s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.127591133s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.524215698s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167205811s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.564064026s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167050362s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563934326s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126993179s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.523895264s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126993179s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523895264s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167020798s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.563987732s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167020798s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563987732s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167158127s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.564064026s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126741409s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.523780823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.166915894s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.564025879s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126741409s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523780823s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.166915894s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.564025879s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126445770s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.523612976s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126445770s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523612976s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126256943s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.523498535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171901703s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.569152832s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126240730s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523498535s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171886444s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569152832s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171794891s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.569145203s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.125863075s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.523284912s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171885490s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.569320679s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171777725s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569145203s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.125848770s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523284912s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171869278s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569320679s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.167030334s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563934326s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171674728s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.569313049s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.125247955s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.522903442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171653748s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569313049s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.125247955s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522903442s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.129738808s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.526550293s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.125171661s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.523025513s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171493530s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.569374084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.125171661s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523025513s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171493530s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569374084s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.124632835s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.522621155s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.124501228s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.522529602s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.124632835s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522621155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.124485016s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522529602s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171274185s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.569412231s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171253204s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569412231s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.124229431s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.522422791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.13( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.124214172s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522422791s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126076698s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.524368286s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.126063347s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.524368286s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171625137s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570022583s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171625137s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570022583s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.123730659s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.522224426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.123730659s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522224426s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171486855s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570030212s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171486855s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570030212s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.123384476s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.522041321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171494484s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570159912s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.123366356s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522041321s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171479225s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570159912s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171265602s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570137024s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171247482s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570137024s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.122593880s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active pruub 73.521522522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.122593880s) [] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521522522s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171079636s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570182800s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171063423s) [1] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570182800s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171109200s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570251465s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.171109200s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570251465s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.170665741s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 67.570121765s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=9.170665741s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570121765s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=15.128764153s) [1] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526550293s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:45:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:45:51 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:51 compute-0 sudo[86140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:45:51 compute-0 sudo[86140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:51 compute-0 sudo[86140]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:51 compute-0 python3[86139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:51 compute-0 sudo[86165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:45:51 compute-0 sudo[86165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:51 compute-0 sudo[86165]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.026246893 +0000 UTC m=+0.047667977 container create 51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb (image=quay.io/ceph/ceph:v19, name=ecstatic_robinson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 09:45:52 compute-0 systemd[1]: Started libpod-conmon-51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb.scope.
Oct 10 09:45:52 compute-0 sudo[86203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86203]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.005574443 +0000 UTC m=+0.026995527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c49596e239e63c6a17417ba7d50fedb5195beac56aac94b3ea335c5d769f00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c49596e239e63c6a17417ba7d50fedb5195beac56aac94b3ea335c5d769f00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c49596e239e63c6a17417ba7d50fedb5195beac56aac94b3ea335c5d769f00/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.137989018 +0000 UTC m=+0.159410112 container init 51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb (image=quay.io/ceph/ceph:v19, name=ecstatic_robinson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:52 compute-0 sudo[86233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:52 compute-0 sudo[86233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86233]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.148758158 +0000 UTC m=+0.170179242 container start 51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb (image=quay.io/ceph/ceph:v19, name=ecstatic_robinson, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:45:52 compute-0 ceph-mon[73551]: 3.15 deep-scrub starts
Oct 10 09:45:52 compute-0 ceph-mon[73551]: 3.15 deep-scrub ok
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-0 ceph-mon[73551]: osdmap e32: 3 total, 2 up, 3 in
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:52 compute-0 ceph-mon[73551]: 2.2 deep-scrub starts
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:45:52 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mon[73551]: 2.2 deep-scrub ok
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.160964957 +0000 UTC m=+0.182386091 container attach 51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb (image=quay.io/ceph/ceph:v19, name=ecstatic_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:52 compute-0 sudo[86260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86260]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 sudo[86327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86327]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 sudo[86352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86352]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 sudo[86377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:52 compute-0 sudo[86377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86377]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:52 compute-0 sudo[86402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:45:52 compute-0 sudo[86402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86402]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Oct 10 09:45:52 compute-0 sudo[86427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:45:52 compute-0 sudo[86427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 10 09:45:52 compute-0 sudo[86427]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2975567301' entity='client.admin' 
Oct 10 09:45:52 compute-0 ecstatic_robinson[86230]: set ssl_option
Oct 10 09:45:52 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 10 09:45:52 compute-0 systemd[1]: libpod-51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb.scope: Deactivated successfully.
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.66946931 +0000 UTC m=+0.690890414 container died 51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb (image=quay.io/ceph/ceph:v19, name=ecstatic_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c49596e239e63c6a17417ba7d50fedb5195beac56aac94b3ea335c5d769f00-merged.mount: Deactivated successfully.
Oct 10 09:45:52 compute-0 sudo[86454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86454]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 podman[86181]: 2025-10-10 09:45:52.723499145 +0000 UTC m=+0.744920219 container remove 51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb (image=quay.io/ceph/ceph:v19, name=ecstatic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 09:45:52 compute-0 systemd[1]: libpod-conmon-51977afc529ebc8060cfa26007e2b44865167f4bf5a85803c7a6469694d78fdb.scope: Deactivated successfully.
Oct 10 09:45:52 compute-0 sudo[86137]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 sudo[86491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:52 compute-0 sudo[86491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86491]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/269809354; not ready for session (expect reconnect)
Oct 10 09:45:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 10 09:45:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Oct 10 09:45:52 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Oct 10 09:45:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:52 compute-0 sudo[86516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86516]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.19( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 sudo[86572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnpjickeoxoyoeiwnojlkoxmqgecmxen ; /usr/bin/python3'
Oct 10 09:45:52 compute-0 sudo[86572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.1( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.6( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.4( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.9( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.1e( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.1f( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 33 pg[2.e( empty local-lis/les=32/33 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32) [0] r=0 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v93: 162 pgs: 44 peering, 118 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:52 compute-0 sudo[86590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:52 compute-0 sudo[86590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-0 sudo[86590]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 sudo[86615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:53 compute-0 sudo[86615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-0 sudo[86615]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-0 python3[86588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:53 compute-0 sudo[86640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-0 sudo[86640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-0 sudo[86640]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.130193403 +0000 UTC m=+0.041154864 container create 8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332 (image=quay.io/ceph/ceph:v19, name=beautiful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: purged_snaps scrub starts
Oct 10 09:45:53 compute-0 ceph-mon[73551]: purged_snaps scrub ok
Oct 10 09:45:53 compute-0 ceph-mon[73551]: 4.12 scrub starts
Oct 10 09:45:53 compute-0 ceph-mon[73551]: 4.12 scrub ok
Oct 10 09:45:53 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2975567301' entity='client.admin' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:53 compute-0 ceph-mon[73551]: osdmap e33: 3 total, 2 up, 3 in
Oct 10 09:45:53 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:53 compute-0 ceph-mon[73551]: 2.3 deep-scrub starts
Oct 10 09:45:53 compute-0 ceph-mon[73551]: 2.3 deep-scrub ok
Oct 10 09:45:53 compute-0 ceph-mon[73551]: pgmap v93: 162 pgs: 44 peering, 118 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:53 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 systemd[1]: Started libpod-conmon-8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332.scope.
Oct 10 09:45:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c69929f60b933ea283991f3a5743d84278b93c105fee1b70fcdb2c82d7be949/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c69929f60b933ea283991f3a5743d84278b93c105fee1b70fcdb2c82d7be949/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c69929f60b933ea283991f3a5743d84278b93c105fee1b70fcdb2c82d7be949/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.111848413 +0000 UTC m=+0.022809894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.222407058 +0000 UTC m=+0.133368539 container init 8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332 (image=quay.io/ceph/ceph:v19, name=beautiful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.22973783 +0000 UTC m=+0.140699301 container start 8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332 (image=quay.io/ceph/ceph:v19, name=beautiful_dijkstra, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.23790928 +0000 UTC m=+0.148870741 container attach 8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332 (image=quay.io/ceph/ceph:v19, name=beautiful_dijkstra, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:53 compute-0 sudo[86703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:53 compute-0 sudo[86703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-0 sudo[86703]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-0 sudo[86728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:45:53 compute-0 sudo[86728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-0 beautiful_dijkstra[86680]: Scheduled rgw.rgw update...
Oct 10 09:45:53 compute-0 beautiful_dijkstra[86680]: Scheduled ingress.rgw.default update...
Oct 10 09:45:53 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 10 09:45:53 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 10 09:45:53 compute-0 systemd[1]: libpod-8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332.scope: Deactivated successfully.
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.664183211 +0000 UTC m=+0.575144702 container died 8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332 (image=quay.io/ceph/ceph:v19, name=beautiful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c69929f60b933ea283991f3a5743d84278b93c105fee1b70fcdb2c82d7be949-merged.mount: Deactivated successfully.
Oct 10 09:45:53 compute-0 podman[86641]: 2025-10-10 09:45:53.726277452 +0000 UTC m=+0.637238953 container remove 8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332 (image=quay.io/ceph/ceph:v19, name=beautiful_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:53 compute-0 systemd[1]: libpod-conmon-8e163ad71a20e06b25f8a4633d48efcefba62acb3e228bfa4f2e84a0b7ee8332.scope: Deactivated successfully.
Oct 10 09:45:53 compute-0 sudo[86572]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/269809354; not ready for session (expect reconnect)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:53 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:53 compute-0 podman[86829]: 2025-10-10 09:45:53.981644277 +0000 UTC m=+0.038599206 container create 3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_swirles, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:54 compute-0 systemd[1]: Started libpod-conmon-3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b.scope.
Oct 10 09:45:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:54 compute-0 podman[86829]: 2025-10-10 09:45:53.965410439 +0000 UTC m=+0.022365398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:54 compute-0 podman[86829]: 2025-10-10 09:45:54.068787497 +0000 UTC m=+0.125742456 container init 3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_swirles, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:45:54 compute-0 podman[86829]: 2025-10-10 09:45:54.076824554 +0000 UTC m=+0.133779473 container start 3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_swirles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:54 compute-0 mystifying_swirles[86874]: 167 167
Oct 10 09:45:54 compute-0 podman[86829]: 2025-10-10 09:45:54.081496354 +0000 UTC m=+0.138451283 container attach 3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:45:54 compute-0 systemd[1]: libpod-3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b.scope: Deactivated successfully.
Oct 10 09:45:54 compute-0 podman[86829]: 2025-10-10 09:45:54.081995161 +0000 UTC m=+0.138950090 container died 3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-27549e3dd3ce1344a1b546f876aa4cf9690658e6023bd42ef66425b8d4b28a43-merged.mount: Deactivated successfully.
Oct 10 09:45:54 compute-0 podman[86829]: 2025-10-10 09:45:54.118028938 +0000 UTC m=+0.174983867 container remove 3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_swirles, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:54 compute-0 systemd[1]: libpod-conmon-3d18bdf73875e7207142e49820c2d97258b61e6010c925484b14745a2734193b.scope: Deactivated successfully.
Oct 10 09:45:54 compute-0 ceph-mon[73551]: 3.1f scrub starts
Oct 10 09:45:54 compute-0 ceph-mon[73551]: 3.1f scrub ok
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:45:54 compute-0 ceph-mon[73551]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: Saving service ingress.rgw.default spec with placement count:2
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:54 compute-0 ceph-mon[73551]: 2.11 scrub starts
Oct 10 09:45:54 compute-0 ceph-mon[73551]: 2.11 scrub ok
Oct 10 09:45:54 compute-0 python3[86909]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.279933495 +0000 UTC m=+0.052170562 container create 3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:54 compute-0 systemd[1]: Started libpod-conmon-3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371.scope.
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.260051412 +0000 UTC m=+0.032288509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c3554e82a404eb7f5cb6bdedabbf8448fa389a6b4141c4413c43f3770371ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c3554e82a404eb7f5cb6bdedabbf8448fa389a6b4141c4413c43f3770371ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c3554e82a404eb7f5cb6bdedabbf8448fa389a6b4141c4413c43f3770371ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c3554e82a404eb7f5cb6bdedabbf8448fa389a6b4141c4413c43f3770371ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c3554e82a404eb7f5cb6bdedabbf8448fa389a6b4141c4413c43f3770371ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.372037996 +0000 UTC m=+0.144275153 container init 3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.382398151 +0000 UTC m=+0.154635218 container start 3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.386538474 +0000 UTC m=+0.158775581 container attach 3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 09:45:54 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 10 09:45:54 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 10 09:45:54 compute-0 python3[87014]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089553.9313014-33883-149044280078423/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:45:54 compute-0 determined_austin[86957]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:45:54 compute-0 determined_austin[86957]: --> All data devices are unavailable
Oct 10 09:45:54 compute-0 systemd[1]: libpod-3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371.scope: Deactivated successfully.
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.750720653 +0000 UTC m=+0.522957720 container died 3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1c3554e82a404eb7f5cb6bdedabbf8448fa389a6b4141c4413c43f3770371ad-merged.mount: Deactivated successfully.
Oct 10 09:45:54 compute-0 podman[86922]: 2025-10-10 09:45:54.792882771 +0000 UTC m=+0.565119848 container remove 3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:45:54 compute-0 ceph-mgr[73845]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/269809354; not ready for session (expect reconnect)
Oct 10 09:45:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:54 compute-0 ceph-mgr[73845]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 09:45:54 compute-0 systemd[1]: libpod-conmon-3cc719548c295c835a35c66c2e77add4ad26c758dd33e98542d2def3be647371.scope: Deactivated successfully.
Oct 10 09:45:54 compute-0 sudo[86728]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:54 compute-0 sudo[87059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:54 compute-0 sudo[87059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:54 compute-0 sudo[87059]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v94: 162 pgs: 44 peering, 118 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:54 compute-0 sudo[87084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:45:54 compute-0 sudo[87084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:55 compute-0 sudo[87132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfqxcsbxdrwyygwwypdtajhrrqpeuzbv ; /usr/bin/python3'
Oct 10 09:45:55 compute-0 sudo[87132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 10 09:45:55 compute-0 ceph-mon[73551]: 5.19 scrub starts
Oct 10 09:45:55 compute-0 ceph-mon[73551]: 5.19 scrub ok
Oct 10 09:45:55 compute-0 ceph-mon[73551]: OSD bench result of 9119.333889 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:45:55 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:55 compute-0 ceph-mon[73551]: 2.14 scrub starts
Oct 10 09:45:55 compute-0 ceph-mon[73551]: 2.14 scrub ok
Oct 10 09:45:55 compute-0 ceph-mon[73551]: pgmap v94: 162 pgs: 44 peering, 118 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354] boot
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.733479500s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528808594s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.733442307s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528808594s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.733419418s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528816223s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767619610s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563049316s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.1a( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767585278s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563049316s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.732030869s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527626038s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.732094765s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527717590s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.732013702s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527626038s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.732082367s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527717590s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767382622s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563163757s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.731618881s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527427673s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767365456s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563163757s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.731575966s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527427673s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.731600761s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527526855s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.733878136s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.529815674s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.733865738s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.529815674s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.730920792s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526962280s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767304420s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563354492s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.731546402s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.527526855s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.e( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767292023s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563354492s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.730899811s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526962280s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.733366966s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.528816223s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.730782509s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526931763s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.730766296s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526931763s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767043591s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563354492s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.729925156s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526283264s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767013550s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563354492s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767111301s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563476562s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.729906082s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526283264s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767096043s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563476562s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.729727745s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526199341s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.729701042s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.526199341s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.729254723s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525787354s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.729243279s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525787354s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.0( empty local-lis/les=27/28 n=0 ec=19/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767035961s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563636780s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.0( empty local-lis/les=27/28 n=0 ec=19/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767016411s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563636780s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.0( empty local-lis/les=25/26 n=0 ec=17/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.728350639s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525001526s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.0( empty local-lis/les=25/26 n=0 ec=17/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.728337288s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.525001526s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767105579s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563858032s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.d( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767084599s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563858032s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.727018356s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523895264s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.726897240s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523780823s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767092228s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563987732s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767117023s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.564025879s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.b( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767077446s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563987732s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.726873398s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523780823s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.726967812s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523895264s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.8( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767101765s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.564025879s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.726563454s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523612976s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.725790977s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522903442s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.725769043s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522903442s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.726542473s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523612976s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.725722313s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523025513s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.725274086s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522621155s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772023201s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569374084s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.725686073s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.523025513s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772640705s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570022583s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.725247383s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522621155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772004128s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.569374084s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.12( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772616863s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570022583s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772523403s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570121765s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.724596024s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522224426s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772506237s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570121765s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772393703s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570030212s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.724577904s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.522224426s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[5.13( empty local-lis/les=27/28 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772377491s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570030212s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.723770142s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521522522s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=34 pruub=11.723753929s) [2] r=-1 lpr=34 pi=[25,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.521522522s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772402763s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570251465s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.772390366s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.570251465s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.767548561s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563003540s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 34 pg[6.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=34 pruub=5.765058041s) [2] r=-1 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.563003540s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:55 compute-0 python3[87134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.37646811 +0000 UTC m=+0.051515989 container create 7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b (image=quay.io/ceph/ceph:v19, name=eloquent_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 09:45:55 compute-0 systemd[1]: Started libpod-conmon-7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b.scope.
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.428263848 +0000 UTC m=+0.045203792 container create 76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.356819395 +0000 UTC m=+0.031867304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7475108a2ec73311004c34c6b1105048e6cf0f07d71fc94dfed2c1af3d7452b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7475108a2ec73311004c34c6b1105048e6cf0f07d71fc94dfed2c1af3d7452b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7475108a2ec73311004c34c6b1105048e6cf0f07d71fc94dfed2c1af3d7452b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 systemd[1]: Started libpod-conmon-76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268.scope.
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.463771487 +0000 UTC m=+0.138819356 container init 7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b (image=quay.io/ceph/ceph:v19, name=eloquent_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.470947923 +0000 UTC m=+0.145995782 container start 7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b (image=quay.io/ceph/ceph:v19, name=eloquent_edison, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.473826552 +0000 UTC m=+0.148874411 container attach 7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b (image=quay.io/ceph/ceph:v19, name=eloquent_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.499597836 +0000 UTC m=+0.116537770 container init 76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.407440083 +0000 UTC m=+0.024380047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.505422436 +0000 UTC m=+0.122362370 container start 76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.508133289 +0000 UTC m=+0.125073233 container attach 76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 09:45:55 compute-0 busy_wilson[87208]: 167 167
Oct 10 09:45:55 compute-0 systemd[1]: libpod-76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268.scope: Deactivated successfully.
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.510313775 +0000 UTC m=+0.127253699 container died 76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dd3d436c08c703acf91d161806fd793c64a6d50b1583fbad9d6baf780958754-merged.mount: Deactivated successfully.
Oct 10 09:45:55 compute-0 podman[87187]: 2025-10-10 09:45:55.538245153 +0000 UTC m=+0.155185087 container remove 76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 09:45:55 compute-0 systemd[1]: libpod-conmon-76895121059cfcd7d448fa891a6acc0c5079c667c0c5538ca7b756b7e02af268.scope: Deactivated successfully.
Oct 10 09:45:55 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct 10 09:45:55 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct 10 09:45:55 compute-0 podman[87252]: 2025-10-10 09:45:55.70161749 +0000 UTC m=+0.041467034 container create 2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:45:55 compute-0 systemd[1]: Started libpod-conmon-2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9.scope.
Oct 10 09:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0379e7590da091335fcc038245e7f31a06e6cf301991f2aa2283cd8fbcc86f4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0379e7590da091335fcc038245e7f31a06e6cf301991f2aa2283cd8fbcc86f4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0379e7590da091335fcc038245e7f31a06e6cf301991f2aa2283cd8fbcc86f4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0379e7590da091335fcc038245e7f31a06e6cf301991f2aa2283cd8fbcc86f4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:55 compute-0 podman[87252]: 2025-10-10 09:45:55.685498437 +0000 UTC m=+0.025348001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:55 compute-0 podman[87252]: 2025-10-10 09:45:55.783070026 +0000 UTC m=+0.122919600 container init 2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:45:55 compute-0 podman[87252]: 2025-10-10 09:45:55.790610194 +0000 UTC m=+0.130459738 container start 2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 09:45:55 compute-0 podman[87252]: 2025-10-10 09:45:55.793569366 +0000 UTC m=+0.133418930 container attach 2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service node-exporter spec with placement *
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Oct 10 09:45:55 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Oct 10 09:45:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 10 09:45:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:55 compute-0 eloquent_edison[87203]: Scheduled node-exporter update...
Oct 10 09:45:55 compute-0 eloquent_edison[87203]: Scheduled grafana update...
Oct 10 09:45:55 compute-0 eloquent_edison[87203]: Scheduled prometheus update...
Oct 10 09:45:55 compute-0 eloquent_edison[87203]: Scheduled alertmanager update...
Oct 10 09:45:55 compute-0 systemd[1]: libpod-7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b.scope: Deactivated successfully.
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.887171639 +0000 UTC m=+0.562219508 container died 7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b (image=quay.io/ceph/ceph:v19, name=eloquent_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 10 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7475108a2ec73311004c34c6b1105048e6cf0f07d71fc94dfed2c1af3d7452b-merged.mount: Deactivated successfully.
Oct 10 09:45:55 compute-0 podman[87161]: 2025-10-10 09:45:55.928150465 +0000 UTC m=+0.603198324 container remove 7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b (image=quay.io/ceph/ceph:v19, name=eloquent_edison, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 09:45:55 compute-0 systemd[1]: libpod-conmon-7849ab92500266a302ab8e9d0d314e7d5078a7238cc8620c22cd0f1b7419515b.scope: Deactivated successfully.
Oct 10 09:45:55 compute-0 sudo[87132]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:56 compute-0 sad_diffie[87269]: {
Oct 10 09:45:56 compute-0 sad_diffie[87269]:     "0": [
Oct 10 09:45:56 compute-0 sad_diffie[87269]:         {
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "devices": [
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "/dev/loop3"
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             ],
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "lv_name": "ceph_lv0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "lv_size": "21470642176",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "name": "ceph_lv0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "tags": {
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.cluster_name": "ceph",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.crush_device_class": "",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.encrypted": "0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.osd_id": "0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.type": "block",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.vdo": "0",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:                 "ceph.with_tpm": "0"
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             },
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "type": "block",
Oct 10 09:45:56 compute-0 sad_diffie[87269]:             "vg_name": "ceph_vg0"
Oct 10 09:45:56 compute-0 sad_diffie[87269]:         }
Oct 10 09:45:56 compute-0 sad_diffie[87269]:     ]
Oct 10 09:45:56 compute-0 sad_diffie[87269]: }
Oct 10 09:45:56 compute-0 systemd[1]: libpod-2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9.scope: Deactivated successfully.
Oct 10 09:45:56 compute-0 podman[87252]: 2025-10-10 09:45:56.123631984 +0000 UTC m=+0.463481568 container died 2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:56 compute-0 podman[87252]: 2025-10-10 09:45:56.17946478 +0000 UTC m=+0.519314324 container remove 2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 09:45:56 compute-0 systemd[1]: libpod-conmon-2113ed6076f7c15ca793377c72c3c5ba0d5be0fe693bfed42d18e3049ab59ce9.scope: Deactivated successfully.
Oct 10 09:45:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 10 09:45:56 compute-0 ceph-mon[73551]: 3.1e scrub starts
Oct 10 09:45:56 compute-0 ceph-mon[73551]: 3.1e scrub ok
Oct 10 09:45:56 compute-0 ceph-mon[73551]: osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354] boot
Oct 10 09:45:56 compute-0 ceph-mon[73551]: osdmap e34: 3 total, 3 up, 3 in
Oct 10 09:45:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:56 compute-0 ceph-mon[73551]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:45:56 compute-0 ceph-mon[73551]: Saving service node-exporter spec with placement *
Oct 10 09:45:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-0 ceph-mon[73551]: Saving service grafana spec with placement compute-0;count:1
Oct 10 09:45:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-0 ceph-mon[73551]: Saving service prometheus spec with placement compute-0;count:1
Oct 10 09:45:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-0 ceph-mon[73551]: Saving service alertmanager spec with placement compute-0;count:1
Oct 10 09:45:56 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-0 ceph-mon[73551]: 2.16 scrub starts
Oct 10 09:45:56 compute-0 ceph-mon[73551]: 2.16 scrub ok
Oct 10 09:45:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 10 09:45:56 compute-0 sudo[87084]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:56 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 10 09:45:56 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 11 completed events
Oct 10 09:45:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:45:56 compute-0 sudo[87349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnkqgaexohfbneglobbzjudzkvelmeu ; /usr/bin/python3'
Oct 10 09:45:56 compute-0 sudo[87349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-0 sudo[87308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:56 compute-0 sudo[87308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:56 compute-0 sudo[87308]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0379e7590da091335fcc038245e7f31a06e6cf301991f2aa2283cd8fbcc86f4b-merged.mount: Deactivated successfully.
Oct 10 09:45:56 compute-0 sudo[87356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:45:56 compute-0 sudo[87356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:56 compute-0 python3[87353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:56 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 10 09:45:56 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 10 09:45:56 compute-0 podman[87381]: 2025-10-10 09:45:56.571712693 +0000 UTC m=+0.050807774 container create 1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:45:56 compute-0 systemd[1]: Started libpod-conmon-1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f.scope.
Oct 10 09:45:56 compute-0 podman[87381]: 2025-10-10 09:45:56.55239531 +0000 UTC m=+0.031490401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:56 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cfc872edca432401d4aa3c6dd85136dcba20693ab398801ffa52663dda2df8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cfc872edca432401d4aa3c6dd85136dcba20693ab398801ffa52663dda2df8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cfc872edca432401d4aa3c6dd85136dcba20693ab398801ffa52663dda2df8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:56 compute-0 podman[87381]: 2025-10-10 09:45:56.685560652 +0000 UTC m=+0.164655753 container init 1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:56 compute-0 podman[87381]: 2025-10-10 09:45:56.696424344 +0000 UTC m=+0.175519425 container start 1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:56 compute-0 podman[87381]: 2025-10-10 09:45:56.704359427 +0000 UTC m=+0.183454518 container attach 1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.83586616 +0000 UTC m=+0.039444335 container create c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:56 compute-0 systemd[1]: Started libpod-conmon-c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933.scope.
Oct 10 09:45:56 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.90230662 +0000 UTC m=+0.105884835 container init c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.907984675 +0000 UTC m=+0.111562840 container start c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_maxwell, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:45:56 compute-0 zen_maxwell[87474]: 167 167
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.91191822 +0000 UTC m=+0.115496385 container attach c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_maxwell, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:56 compute-0 systemd[1]: libpod-c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933.scope: Deactivated successfully.
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.912533712 +0000 UTC m=+0.116111887 container died c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.819096435 +0000 UTC m=+0.022674640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c86f97bd5a7559022157245a919527c7bedcc19f2ba9ca7c17606758f3638498-merged.mount: Deactivated successfully.
Oct 10 09:45:56 compute-0 podman[87438]: 2025-10-10 09:45:56.948152734 +0000 UTC m=+0.151730909 container remove c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_maxwell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:45:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v97: 162 pgs: 80 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:45:56 compute-0 systemd[1]: libpod-conmon-c7fffb15f4887be55b5ae316c362ac512d7a405bfae2bceca538bcecd28cd933.scope: Deactivated successfully.
Oct 10 09:45:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Oct 10 09:45:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2898111592' entity='client.admin' 
Oct 10 09:45:57 compute-0 systemd[1]: libpod-1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f.scope: Deactivated successfully.
Oct 10 09:45:57 compute-0 podman[87381]: 2025-10-10 09:45:57.084219474 +0000 UTC m=+0.563314555 container died 1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-24cfc872edca432401d4aa3c6dd85136dcba20693ab398801ffa52663dda2df8-merged.mount: Deactivated successfully.
Oct 10 09:45:57 compute-0 podman[87381]: 2025-10-10 09:45:57.126004218 +0000 UTC m=+0.605099289 container remove 1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:45:57 compute-0 podman[87498]: 2025-10-10 09:45:57.13682825 +0000 UTC m=+0.070486100 container create ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:45:57 compute-0 sudo[87349]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:57 compute-0 systemd[1]: libpod-conmon-1cd47237277db0f03e331312c888645990b55c4139b3a6ebb84ecc8bbc21f10f.scope: Deactivated successfully.
Oct 10 09:45:57 compute-0 systemd[1]: Started libpod-conmon-ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4.scope.
Oct 10 09:45:57 compute-0 podman[87498]: 2025-10-10 09:45:57.104474499 +0000 UTC m=+0.038132359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbae28eaaa10ded9863dfeeb5854d580f0c1589e9b8e9523036e16eb4c87d5b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbae28eaaa10ded9863dfeeb5854d580f0c1589e9b8e9523036e16eb4c87d5b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbae28eaaa10ded9863dfeeb5854d580f0c1589e9b8e9523036e16eb4c87d5b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbae28eaaa10ded9863dfeeb5854d580f0c1589e9b8e9523036e16eb4c87d5b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 podman[87498]: 2025-10-10 09:45:57.225081749 +0000 UTC m=+0.158739629 container init ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 09:45:57 compute-0 podman[87498]: 2025-10-10 09:45:57.233185037 +0000 UTC m=+0.166842897 container start ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:57 compute-0 podman[87498]: 2025-10-10 09:45:57.237386422 +0000 UTC m=+0.171044272 container attach ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:45:57 compute-0 ceph-mon[73551]: 6.18 scrub starts
Oct 10 09:45:57 compute-0 ceph-mon[73551]: 6.18 scrub ok
Oct 10 09:45:57 compute-0 ceph-mon[73551]: osdmap e35: 3 total, 3 up, 3 in
Oct 10 09:45:57 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:57 compute-0 ceph-mon[73551]: 2.17 deep-scrub starts
Oct 10 09:45:57 compute-0 ceph-mon[73551]: 2.17 deep-scrub ok
Oct 10 09:45:57 compute-0 ceph-mon[73551]: pgmap v97: 162 pgs: 80 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:45:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2898111592' entity='client.admin' 
Oct 10 09:45:57 compute-0 sudo[87554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfryuaeudzzfzieauxyhfgclfwzjmnsw ; /usr/bin/python3'
Oct 10 09:45:57 compute-0 sudo[87554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:57 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct 10 09:45:57 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct 10 09:45:57 compute-0 python3[87556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:57 compute-0 podman[87577]: 2025-10-10 09:45:57.638527639 +0000 UTC m=+0.059857715 container create 3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8 (image=quay.io/ceph/ceph:v19, name=interesting_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:57 compute-0 systemd[1]: Started libpod-conmon-3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8.scope.
Oct 10 09:45:57 compute-0 podman[87577]: 2025-10-10 09:45:57.610410834 +0000 UTC m=+0.031740930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd1cfd4a5ad5cffae1272ec909ae162ee97809ddffdc3aade09af7c8ae53f1f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd1cfd4a5ad5cffae1272ec909ae162ee97809ddffdc3aade09af7c8ae53f1f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd1cfd4a5ad5cffae1272ec909ae162ee97809ddffdc3aade09af7c8ae53f1f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:57 compute-0 podman[87577]: 2025-10-10 09:45:57.730411772 +0000 UTC m=+0.151741858 container init 3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8 (image=quay.io/ceph/ceph:v19, name=interesting_robinson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:57 compute-0 podman[87577]: 2025-10-10 09:45:57.737244697 +0000 UTC m=+0.158574773 container start 3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8 (image=quay.io/ceph/ceph:v19, name=interesting_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:45:57 compute-0 podman[87577]: 2025-10-10 09:45:57.740341684 +0000 UTC m=+0.161671770 container attach 3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8 (image=quay.io/ceph/ceph:v19, name=interesting_robinson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:57 compute-0 lvm[87664]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:45:57 compute-0 lvm[87664]: VG ceph_vg0 finished
Oct 10 09:45:57 compute-0 gracious_fermat[87526]: {}
Oct 10 09:45:57 compute-0 systemd[1]: libpod-ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4.scope: Deactivated successfully.
Oct 10 09:45:57 compute-0 podman[87498]: 2025-10-10 09:45:57.959252397 +0000 UTC m=+0.892910287 container died ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_fermat, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:57 compute-0 systemd[1]: libpod-ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4.scope: Consumed 1.202s CPU time.
Oct 10 09:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbae28eaaa10ded9863dfeeb5854d580f0c1589e9b8e9523036e16eb4c87d5b6-merged.mount: Deactivated successfully.
Oct 10 09:45:58 compute-0 podman[87498]: 2025-10-10 09:45:58.016822183 +0000 UTC m=+0.950480033 container remove ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:58 compute-0 systemd[1]: libpod-conmon-ac66874d355db8a233a635116462d40094e52fbdac7785a269c76d2c01d21bc4.scope: Deactivated successfully.
Oct 10 09:45:58 compute-0 sudo[87356]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:45:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:45:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Oct 10 09:45:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1237849469' entity='client.admin' 
Oct 10 09:45:58 compute-0 systemd[1]: libpod-3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8.scope: Deactivated successfully.
Oct 10 09:45:58 compute-0 podman[87577]: 2025-10-10 09:45:58.164218863 +0000 UTC m=+0.585548969 container died 3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8 (image=quay.io/ceph/ceph:v19, name=interesting_robinson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:45:58 compute-0 sudo[87679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:45:58 compute-0 sudo[87679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:58 compute-0 sudo[87679]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-afd1cfd4a5ad5cffae1272ec909ae162ee97809ddffdc3aade09af7c8ae53f1f-merged.mount: Deactivated successfully.
Oct 10 09:45:58 compute-0 podman[87577]: 2025-10-10 09:45:58.212093895 +0000 UTC m=+0.633423971 container remove 3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8 (image=quay.io/ceph/ceph:v19, name=interesting_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 09:45:58 compute-0 systemd[1]: libpod-conmon-3cdd6ef8a9224f4c0fc364263494b35a9c0df6d3acf6b6eacf3929c29ca497f8.scope: Deactivated successfully.
Oct 10 09:45:58 compute-0 sudo[87554]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:58 compute-0 ceph-mon[73551]: 5.1d scrub starts
Oct 10 09:45:58 compute-0 ceph-mon[73551]: 5.1d scrub ok
Oct 10 09:45:58 compute-0 ceph-mon[73551]: 6.17 scrub starts
Oct 10 09:45:58 compute-0 ceph-mon[73551]: 6.17 scrub ok
Oct 10 09:45:58 compute-0 ceph-mon[73551]: 2.1a scrub starts
Oct 10 09:45:58 compute-0 ceph-mon[73551]: 2.1a scrub ok
Oct 10 09:45:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:58 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1237849469' entity='client.admin' 
Oct 10 09:45:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct 10 09:45:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 10 09:45:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 10 09:45:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:45:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:58 compute-0 sudo[87724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:58 compute-0 sudo[87724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:58 compute-0 sudo[87724]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:58 compute-0 sudo[87767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvgwbtqkfumzszbsvtswghyknovwrib ; /usr/bin/python3'
Oct 10 09:45:58 compute-0 sudo[87767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:58 compute-0 sudo[87772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:58 compute-0 sudo[87772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:58 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 10 09:45:58 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 10 09:45:58 compute-0 python3[87771]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:58 compute-0 podman[87797]: 2025-10-10 09:45:58.655464323 +0000 UTC m=+0.068444190 container create 1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc (image=quay.io/ceph/ceph:v19, name=hardcore_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:58 compute-0 systemd[1]: Started libpod-conmon-1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc.scope.
Oct 10 09:45:58 compute-0 podman[87797]: 2025-10-10 09:45:58.628767657 +0000 UTC m=+0.041747644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0a561581559cb816b964fd4a06aa8bed9858974548b6a99ac0a0317278ed1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0a561581559cb816b964fd4a06aa8bed9858974548b6a99ac0a0317278ed1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0a561581559cb816b964fd4a06aa8bed9858974548b6a99ac0a0317278ed1d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:58 compute-0 podman[87797]: 2025-10-10 09:45:58.747861034 +0000 UTC m=+0.160840991 container init 1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc (image=quay.io/ceph/ceph:v19, name=hardcore_mcclintock, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 09:45:58 compute-0 podman[87797]: 2025-10-10 09:45:58.754272984 +0000 UTC m=+0.167252851 container start 1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc (image=quay.io/ceph/ceph:v19, name=hardcore_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:58 compute-0 podman[87797]: 2025-10-10 09:45:58.758052353 +0000 UTC m=+0.171032230 container attach 1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc (image=quay.io/ceph/ceph:v19, name=hardcore_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.795128306 +0000 UTC m=+0.045647838 container create 1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c (image=quay.io/ceph/ceph:v19, name=confident_visvesvaraya, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:58 compute-0 systemd[1]: Started libpod-conmon-1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c.scope.
Oct 10 09:45:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.773612318 +0000 UTC m=+0.024131850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.876403086 +0000 UTC m=+0.126922638 container init 1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c (image=quay.io/ceph/ceph:v19, name=confident_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.883954295 +0000 UTC m=+0.134473807 container start 1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c (image=quay.io/ceph/ceph:v19, name=confident_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.887623731 +0000 UTC m=+0.138143333 container attach 1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c (image=quay.io/ceph/ceph:v19, name=confident_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:58 compute-0 systemd[1]: libpod-1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c.scope: Deactivated successfully.
Oct 10 09:45:58 compute-0 confident_visvesvaraya[87847]: 167 167
Oct 10 09:45:58 compute-0 conmon[87847]: conmon 1ee17182112a68b8a721 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c.scope/container/memory.events
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.890931205 +0000 UTC m=+0.141450727 container died 1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c (image=quay.io/ceph/ceph:v19, name=confident_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bd8eaca79591dc2f8b359ec4abbf17eb702331cbb1265e60533d8d8bb8c732b-merged.mount: Deactivated successfully.
Oct 10 09:45:58 compute-0 podman[87829]: 2025-10-10 09:45:58.931170886 +0000 UTC m=+0.181690408 container remove 1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c (image=quay.io/ceph/ceph:v19, name=confident_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:45:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v98: 162 pgs: 36 peering, 126 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:45:58 compute-0 systemd[1]: libpod-conmon-1ee17182112a68b8a721c6a6b15524957893f1e10d03ecae0fabf674a14e906c.scope: Deactivated successfully.
Oct 10 09:45:58 compute-0 sudo[87772]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.xkdepb (monmap changed)...
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.xkdepb (monmap changed)...
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:45:59 compute-0 sudo[87884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:59 compute-0 sudo[87884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:59 compute-0 sudo[87884]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Oct 10 09:45:59 compute-0 sudo[87909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3410162506' entity='client.admin' 
Oct 10 09:45:59 compute-0 sudo[87909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:59 compute-0 systemd[1]: libpod-1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc.scope: Deactivated successfully.
Oct 10 09:45:59 compute-0 podman[87797]: 2025-10-10 09:45:59.182427309 +0000 UTC m=+0.595407216 container died 1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc (image=quay.io/ceph/ceph:v19, name=hardcore_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e0a561581559cb816b964fd4a06aa8bed9858974548b6a99ac0a0317278ed1d-merged.mount: Deactivated successfully.
Oct 10 09:45:59 compute-0 podman[87797]: 2025-10-10 09:45:59.223449447 +0000 UTC m=+0.636429314 container remove 1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc (image=quay.io/ceph/ceph:v19, name=hardcore_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 10 09:45:59 compute-0 systemd[1]: libpod-conmon-1eff380ff6cf05b7661481ff350c1b64b780a2a570e8f7ae1fac6efba99dbbbc.scope: Deactivated successfully.
Oct 10 09:45:59 compute-0 sudo[87767]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:59 compute-0 ceph-mon[73551]: 6.1f scrub starts
Oct 10 09:45:59 compute-0 ceph-mon[73551]: 6.1f scrub ok
Oct 10 09:45:59 compute-0 ceph-mon[73551]: 4.14 scrub starts
Oct 10 09:45:59 compute-0 ceph-mon[73551]: 4.14 scrub ok
Oct 10 09:45:59 compute-0 ceph-mon[73551]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:45:59 compute-0 ceph-mon[73551]: 5.1f scrub starts
Oct 10 09:45:59 compute-0 ceph-mon[73551]: 5.1f scrub ok
Oct 10 09:45:59 compute-0 ceph-mon[73551]: pgmap v98: 162 pgs: 36 peering, 126 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-0 ceph-mon[73551]: Reconfiguring mgr.compute-0.xkdepb (monmap changed)...
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:45:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3410162506' entity='client.admin' 
Oct 10 09:45:59 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.496379095 +0000 UTC m=+0.047171890 container create ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb (image=quay.io/ceph/ceph:v19, name=elastic_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:45:59 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 10 09:45:59 compute-0 systemd[1]: Started libpod-conmon-ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb.scope.
Oct 10 09:45:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.476264125 +0000 UTC m=+0.027056930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.575937895 +0000 UTC m=+0.126730700 container init ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb (image=quay.io/ceph/ceph:v19, name=elastic_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.582600804 +0000 UTC m=+0.133393609 container start ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb (image=quay.io/ceph/ceph:v19, name=elastic_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.586604631 +0000 UTC m=+0.137397426 container attach ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb (image=quay.io/ceph/ceph:v19, name=elastic_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:59 compute-0 elastic_lamarr[87981]: 167 167
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.588510377 +0000 UTC m=+0.139303172 container died ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb (image=quay.io/ceph/ceph:v19, name=elastic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:59 compute-0 systemd[1]: libpod-ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb.scope: Deactivated successfully.
Oct 10 09:45:59 compute-0 podman[87965]: 2025-10-10 09:45:59.627111072 +0000 UTC m=+0.177903857 container remove ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb (image=quay.io/ceph/ceph:v19, name=elastic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:59 compute-0 systemd[1]: libpod-conmon-ef55c2db8464c796441eec3592039f4ff04edc9940eb684b6c2b4b1c8ad4c6fb.scope: Deactivated successfully.
Oct 10 09:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-33b38fb66747231c28268682d9a45bdd08cc411e9ae56de54a36bed596e3cfe5-merged.mount: Deactivated successfully.
Oct 10 09:45:59 compute-0 sudo[87909]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:59 compute-0 sudo[88021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunsgqoejljopitkcefrnhjlnzsghkow ; /usr/bin/python3'
Oct 10 09:45:59 compute-0 sudo[88021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:45:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct 10 09:45:59 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct 10 09:45:59 compute-0 sudo[88024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:59 compute-0 sudo[88024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:59 compute-0 sudo[88024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:59 compute-0 python3[88023]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:45:59 compute-0 sudo[88049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:59 compute-0 sudo[88049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:59 compute-0 sudo[88021]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.158080436 +0000 UTC m=+0.045677498 container create 8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:00 compute-0 systemd[1]: Started libpod-conmon-8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2.scope.
Oct 10 09:46:00 compute-0 sudo[88138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmnfejvxfvbasgyvetikqpynnrevzun ; /usr/bin/python3'
Oct 10 09:46:00 compute-0 sudo[88138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.138160522 +0000 UTC m=+0.025757614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.235894907 +0000 UTC m=+0.123492049 container init 8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.243747117 +0000 UTC m=+0.131344189 container start 8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.247389451 +0000 UTC m=+0.134986563 container attach 8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_roentgen, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:00 compute-0 vigorous_roentgen[88143]: 167 167
Oct 10 09:46:00 compute-0 systemd[1]: libpod-8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2.scope: Deactivated successfully.
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.249511854 +0000 UTC m=+0.137108966 container died 8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 09:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8003b60b872afbb7884d07f60e18751d36fdaad882656f421f8543343f3ed27-merged.mount: Deactivated successfully.
Oct 10 09:46:00 compute-0 podman[88101]: 2025-10-10 09:46:00.289767625 +0000 UTC m=+0.177364727 container remove 8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:46:00 compute-0 systemd[1]: libpod-conmon-8cc2a842d79d7f5d2383aec35b4a7f84629e5687f93a81015119a1bd63e35dd2.scope: Deactivated successfully.
Oct 10 09:46:00 compute-0 python3[88145]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.xkdepb/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:00 compute-0 sudo[88049]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:00 compute-0 ceph-mon[73551]: 6.c scrub starts
Oct 10 09:46:00 compute-0 ceph-mon[73551]: 6.c scrub ok
Oct 10 09:46:00 compute-0 ceph-mon[73551]: 3.e scrub starts
Oct 10 09:46:00 compute-0 ceph-mon[73551]: 3.e scrub ok
Oct 10 09:46:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:00 compute-0 ceph-mon[73551]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 10 09:46:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:46:00 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:00 compute-0 ceph-mon[73551]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 10 09:46:00 compute-0 ceph-mon[73551]: 5.10 scrub starts
Oct 10 09:46:00 compute-0 ceph-mon[73551]: 5.10 scrub ok
Oct 10 09:46:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct 10 09:46:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct 10 09:46:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 10 09:46:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 09:46:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Oct 10 09:46:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Oct 10 09:46:00 compute-0 podman[88161]: 2025-10-10 09:46:00.442190427 +0000 UTC m=+0.065372275 container create 500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b (image=quay.io/ceph/ceph:v19, name=eager_mendeleev, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 09:46:00 compute-0 systemd[1]: Started libpod-conmon-500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b.scope.
Oct 10 09:46:00 compute-0 sudo[88172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:00 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 10 09:46:00 compute-0 sudo[88172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:00 compute-0 sudo[88172]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:00 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 10 09:46:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77e6e545893c5952c346c94231b39c37337b76f1e883a508251fd2a9e843b32/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77e6e545893c5952c346c94231b39c37337b76f1e883a508251fd2a9e843b32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77e6e545893c5952c346c94231b39c37337b76f1e883a508251fd2a9e843b32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:00 compute-0 podman[88161]: 2025-10-10 09:46:00.422147519 +0000 UTC m=+0.045329467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:00 compute-0 podman[88161]: 2025-10-10 09:46:00.531396519 +0000 UTC m=+0.154578397 container init 500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b (image=quay.io/ceph/ceph:v19, name=eager_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:00 compute-0 podman[88161]: 2025-10-10 09:46:00.541514516 +0000 UTC m=+0.164696364 container start 500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b (image=quay.io/ceph/ceph:v19, name=eager_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:00 compute-0 podman[88161]: 2025-10-10 09:46:00.545069828 +0000 UTC m=+0.168251866 container attach 500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b (image=quay.io/ceph/ceph:v19, name=eager_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:46:00 compute-0 sudo[88205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:00 compute-0 sudo[88205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:00 compute-0 podman[88265]: 2025-10-10 09:46:00.879685243 +0000 UTC m=+0.052845005 container create a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:00 compute-0 systemd[1]: Started libpod-conmon-a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade.scope.
Oct 10 09:46:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.xkdepb/server_addr}] v 0)
Oct 10 09:46:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2517476288' entity='client.admin' 
Oct 10 09:46:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:00 compute-0 podman[88265]: 2025-10-10 09:46:00.950066708 +0000 UTC m=+0.123226480 container init a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:46:00 compute-0 podman[88265]: 2025-10-10 09:46:00.853753663 +0000 UTC m=+0.026913435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:00 compute-0 podman[88161]: 2025-10-10 09:46:00.953195646 +0000 UTC m=+0.576377494 container died 500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b (image=quay.io/ceph/ceph:v19, name=eager_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:46:00 compute-0 systemd[1]: libpod-500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b.scope: Deactivated successfully.
Oct 10 09:46:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v99: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:00 compute-0 podman[88265]: 2025-10-10 09:46:00.959064927 +0000 UTC m=+0.132224699 container start a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 09:46:00 compute-0 podman[88265]: 2025-10-10 09:46:00.961998838 +0000 UTC m=+0.135158600 container attach a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:00 compute-0 distracted_yalow[88281]: 167 167
Oct 10 09:46:00 compute-0 systemd[1]: libpod-a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade.scope: Deactivated successfully.
Oct 10 09:46:00 compute-0 podman[88265]: 2025-10-10 09:46:00.965791849 +0000 UTC m=+0.138951601 container died a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Oct 10 09:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a77e6e545893c5952c346c94231b39c37337b76f1e883a508251fd2a9e843b32-merged.mount: Deactivated successfully.
Oct 10 09:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-58e100ada07d368165d50007dfc89786208964d2818090c55a1bdd0d92add349-merged.mount: Deactivated successfully.
Oct 10 09:46:01 compute-0 podman[88161]: 2025-10-10 09:46:01.005981427 +0000 UTC m=+0.629163275 container remove 500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b (image=quay.io/ceph/ceph:v19, name=eager_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:01 compute-0 systemd[1]: libpod-conmon-500a477736aefb1586acca32b8ed5a487a0ee88445ec710b3c2479677daf8d4b.scope: Deactivated successfully.
Oct 10 09:46:01 compute-0 sudo[88138]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:01 compute-0 podman[88265]: 2025-10-10 09:46:01.027548858 +0000 UTC m=+0.200708610 container remove a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_yalow, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:46:01 compute-0 systemd[1]: libpod-conmon-a364cb47d9f8ad9a615e9147b295ce65d0ea9926810399de6e564d7157d61ade.scope: Deactivated successfully.
Oct 10 09:46:01 compute-0 sudo[88205]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
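Note: the mon_command JSON above maps directly onto the ceph CLI; a sketch of the equivalent call, with the capability strings taken verbatim from the logged command:

    # CLI equivalent of the logged "auth get-or-create" for the crash daemon's key
    ceph auth get-or-create client.crash.compute-1 mon 'profile crash' mgr 'profile crash'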
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct 10 09:46:01 compute-0 ceph-mon[73551]: 4.f scrub starts
Oct 10 09:46:01 compute-0 ceph-mon[73551]: 4.f scrub ok
Oct 10 09:46:01 compute-0 ceph-mon[73551]: 5.8 scrub starts
Oct 10 09:46:01 compute-0 ceph-mon[73551]: 5.8 scrub ok
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: Reconfiguring osd.0 (monmap changed)...
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:01 compute-0 ceph-mon[73551]: Reconfiguring daemon osd.0 on compute-0
Oct 10 09:46:01 compute-0 ceph-mon[73551]: 5.11 scrub starts
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2517476288' entity='client.admin' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: 5.11 scrub ok
Oct 10 09:46:01 compute-0 ceph-mon[73551]: pgmap v99: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:46:01 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:01 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 10 09:46:01 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 10 09:46:01 compute-0 sudo[88343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyajljgxsaarmxkxyhbsykrnbsvxgjbv ; /usr/bin/python3'
Oct 10 09:46:01 compute-0 sudo[88343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 09:46:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Oct 10 09:46:01 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Oct 10 09:46:02 compute-0 python3[88345]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.rfugxc/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
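Note: the Ansible task wraps a single ceph command in a throwaway podman container. A sketch of the direct equivalent from a host that already has a ceph client and the admin keyring, using the same flags as the logged entrypoint:

    ceph --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 \
         -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
         config set mgr mgr/dashboard/compute-1.rfugxc/server_addr 192.168.122.101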
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.150813682 +0000 UTC m=+0.071523147 container create d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2 (image=quay.io/ceph/ceph:v19, name=tender_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Oct 10 09:46:02 compute-0 systemd[1]: Started libpod-conmon-d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2.scope.
Oct 10 09:46:02 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.12221047 +0000 UTC m=+0.042919965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65bd97b99120f3c90e37883d1f03408361e916d7359e189a97d2fea3e75a1542/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65bd97b99120f3c90e37883d1f03408361e916d7359e189a97d2fea3e75a1542/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65bd97b99120f3c90e37883d1f03408361e916d7359e189a97d2fea3e75a1542/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.240682045 +0000 UTC m=+0.161391500 container init d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2 (image=quay.io/ceph/ceph:v19, name=tender_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.247015943 +0000 UTC m=+0.167725398 container start d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2 (image=quay.io/ceph/ceph:v19, name=tender_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.250353528 +0000 UTC m=+0.171062983 container attach d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2 (image=quay.io/ceph/ceph:v19, name=tender_albattani, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: 3.4 scrub starts
Oct 10 09:46:02 compute-0 ceph-mon[73551]: 3.4 scrub ok
Oct 10 09:46:02 compute-0 ceph-mon[73551]: 3.11 scrub starts
Oct 10 09:46:02 compute-0 ceph-mon[73551]: 3.11 scrub ok
Oct 10 09:46:02 compute-0 ceph-mon[73551]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 10 09:46:02 compute-0 ceph-mon[73551]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 10 09:46:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:02 compute-0 ceph-mon[73551]: 5.15 scrub starts
Oct 10 09:46:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:02 compute-0 ceph-mon[73551]: Reconfiguring osd.1 (monmap changed)...
Oct 10 09:46:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 09:46:02 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:02 compute-0 ceph-mon[73551]: Reconfiguring daemon osd.1 on compute-1
Oct 10 09:46:02 compute-0 ceph-mon[73551]: 5.15 scrub ok
Oct 10 09:46:02 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 10 09:46:02 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 10 09:46:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.rfugxc/server_addr}] v 0)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' 
Oct 10 09:46:02 compute-0 systemd[1]: libpod-d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2.scope: Deactivated successfully.
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.676863727 +0000 UTC m=+0.597573222 container died d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2 (image=quay.io/ceph/ceph:v19, name=tender_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Oct 10 09:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-65bd97b99120f3c90e37883d1f03408361e916d7359e189a97d2fea3e75a1542-merged.mount: Deactivated successfully.
Oct 10 09:46:02 compute-0 podman[88346]: 2025-10-10 09:46:02.721184438 +0000 UTC m=+0.641893883 container remove d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2 (image=quay.io/ceph/ceph:v19, name=tender_albattani, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:46:02 compute-0 systemd[1]: libpod-conmon-d1f7bb252ab6a2c51054da9b1c547136e487dd4fd09a9c1f6b02d33774cf76a2.scope: Deactivated successfully.
Oct 10 09:46:02 compute-0 sudo[88343]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:02 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct 10 09:46:02 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct 10 09:46:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:46:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:46:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:02 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct 10 09:46:02 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct 10 09:46:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v100: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
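Note: the pgmap digests logged by ceph-mgr (162 PGs, all active+clean, 60 GiB raw) can be reproduced on demand; a sketch, assuming an admin keyring is available:

    ceph pg stat    # one-line PG summary, e.g. "162 pgs: 162 active+clean; ..."
    ceph df         # raw and per-pool capacity behind the "80 MiB used" figure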
Oct 10 09:46:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:03 compute-0 ceph-mon[73551]: 4.4 scrub starts
Oct 10 09:46:03 compute-0 ceph-mon[73551]: 4.4 scrub ok
Oct 10 09:46:03 compute-0 ceph-mon[73551]: 5.b scrub starts
Oct 10 09:46:03 compute-0 ceph-mon[73551]: 5.b scrub ok
Oct 10 09:46:03 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' 
Oct 10 09:46:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:03 compute-0 ceph-mon[73551]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 10 09:46:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:46:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:46:03 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:03 compute-0 ceph-mon[73551]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 10 09:46:03 compute-0 ceph-mon[73551]: pgmap v100: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:03 compute-0 ceph-mon[73551]: 3.14 scrub starts
Oct 10 09:46:03 compute-0 ceph-mon[73551]: 3.14 scrub ok
Oct 10 09:46:03 compute-0 sudo[88421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkemluvjhrybfjaxwcksuuayfkfddah ; /usr/bin/python3'
Oct 10 09:46:03 compute-0 sudo[88421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:03 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 10 09:46:03 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 10 09:46:03 compute-0 python3[88423]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.gkrssp/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:03 compute-0 podman[88424]: 2025-10-10 09:46:03.659538784 +0000 UTC m=+0.043832776 container create 84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82 (image=quay.io/ceph/ceph:v19, name=cool_blackburn, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:03 compute-0 systemd[1]: Started libpod-conmon-84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82.scope.
Oct 10 09:46:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:03 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19541122245e67888144300a23b66f97bc11f5fc45c12ef78e60b75d28396965/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19541122245e67888144300a23b66f97bc11f5fc45c12ef78e60b75d28396965/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19541122245e67888144300a23b66f97bc11f5fc45c12ef78e60b75d28396965/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:03 compute-0 podman[88424]: 2025-10-10 09:46:03.6419385 +0000 UTC m=+0.026232502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:03 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct 10 09:46:03 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct 10 09:46:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 10 09:46:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:46:03 compute-0 podman[88424]: 2025-10-10 09:46:03.748488227 +0000 UTC m=+0.132782229 container init 84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82 (image=quay.io/ceph/ceph:v19, name=cool_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 10 09:46:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:46:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:03 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct 10 09:46:03 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct 10 09:46:03 compute-0 podman[88424]: 2025-10-10 09:46:03.755530548 +0000 UTC m=+0.139824570 container start 84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82 (image=quay.io/ceph/ceph:v19, name=cool_blackburn, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:46:03 compute-0 podman[88424]: 2025-10-10 09:46:03.759698012 +0000 UTC m=+0.143992024 container attach 84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82 (image=quay.io/ceph/ceph:v19, name=cool_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.gkrssp/server_addr}] v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/699590867' entity='client.admin' 
Oct 10 09:46:04 compute-0 systemd[1]: libpod-84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82.scope: Deactivated successfully.
Oct 10 09:46:04 compute-0 conmon[88440]: conmon 84dbeed8158b5741de58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82.scope/container/memory.events
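Note: the conmon warning above is commonly seen when a short-lived container exits before conmon can read its cgroup memory.events file, and is generally harmless. A sketch for checking the cgroup setup podman is using (output field names vary by podman version):

    podman info | grep -i cgroup    # shows the cgroup manager and cgroup version in use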
Oct 10 09:46:04 compute-0 podman[88424]: 2025-10-10 09:46:04.195726987 +0000 UTC m=+0.580020969 container died 84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82 (image=quay.io/ceph/ceph:v19, name=cool_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-19541122245e67888144300a23b66f97bc11f5fc45c12ef78e60b75d28396965-merged.mount: Deactivated successfully.
Oct 10 09:46:04 compute-0 podman[88424]: 2025-10-10 09:46:04.235498282 +0000 UTC m=+0.619792264 container remove 84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82 (image=quay.io/ceph/ceph:v19, name=cool_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:46:04 compute-0 systemd[1]: libpod-conmon-84dbeed8158b5741de5819c7929a27e20f489ffa0a8bf65824082bcbd6205b82.scope: Deactivated successfully.
Oct 10 09:46:04 compute-0 sudo[88421]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.gkrssp (monmap changed)...
Oct 10 09:46:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.gkrssp (monmap changed)...
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:46:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:46:04 compute-0 sudo[88498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihztrtbnmozlqierpieupejhonllbyjm ; /usr/bin/python3'
Oct 10 09:46:04 compute-0 sudo[88498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: 5.5 scrub starts
Oct 10 09:46:04 compute-0 ceph-mon[73551]: 5.5 scrub ok
Oct 10 09:46:04 compute-0 ceph-mon[73551]: 5.d scrub starts
Oct 10 09:46:04 compute-0 ceph-mon[73551]: 5.d scrub ok
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 10 09:46:04 compute-0 ceph-mon[73551]: 5.16 scrub starts
Oct 10 09:46:04 compute-0 ceph-mon[73551]: 5.16 scrub ok
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/699590867' entity='client.admin' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:46:04 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:04 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 10 09:46:04 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 10 09:46:04 compute-0 python3[88500]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:04 compute-0 podman[88501]: 2025-10-10 09:46:04.648224897 +0000 UTC m=+0.060675643 container create e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301 (image=quay.io/ceph/ceph:v19, name=nostalgic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 09:46:04 compute-0 systemd[1]: Started libpod-conmon-e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301.scope.
Oct 10 09:46:04 compute-0 podman[88501]: 2025-10-10 09:46:04.618732886 +0000 UTC m=+0.031183702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:04 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefbf2008b2717719fcd34a037acac05455901dbbfb6a243fd27928256e05a6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefbf2008b2717719fcd34a037acac05455901dbbfb6a243fd27928256e05a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efefbf2008b2717719fcd34a037acac05455901dbbfb6a243fd27928256e05a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:04 compute-0 podman[88501]: 2025-10-10 09:46:04.739156779 +0000 UTC m=+0.151607585 container init e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301 (image=quay.io/ceph/ceph:v19, name=nostalgic_carver, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:04 compute-0 podman[88501]: 2025-10-10 09:46:04.74589682 +0000 UTC m=+0.158347586 container start e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301 (image=quay.io/ceph/ceph:v19, name=nostalgic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:46:04 compute-0 podman[88501]: 2025-10-10 09:46:04.749845625 +0000 UTC m=+0.162296391 container attach e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301 (image=quay.io/ceph/ceph:v19, name=nostalgic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v101: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:04 compute-0 sudo[88539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:04 compute-0 sudo[88539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:04 compute-0 sudo[88539]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:05 compute-0 sudo[88564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:46:05 compute-0 sudo[88564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
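Note: cephadm gather-facts, invoked above via sudo, prints a JSON inventory of the host. A sketch of running it by hand (the exact key set varies by cephadm release):

    sudo cephadm gather-facts | python3 -m json.tool | head -n 20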
Oct 10 09:46:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct 10 09:46:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 09:46:05 compute-0 ceph-mon[73551]: 6.6 scrub starts
Oct 10 09:46:05 compute-0 ceph-mon[73551]: 6.6 scrub ok
Oct 10 09:46:05 compute-0 ceph-mon[73551]: 4.8 scrub starts
Oct 10 09:46:05 compute-0 ceph-mon[73551]: 4.8 scrub ok
Oct 10 09:46:05 compute-0 ceph-mon[73551]: Reconfiguring mgr.compute-2.gkrssp (monmap changed)...
Oct 10 09:46:05 compute-0 ceph-mon[73551]: Reconfiguring daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:46:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:05 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:05 compute-0 ceph-mon[73551]: pgmap v101: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:05 compute-0 ceph-mon[73551]: 4.13 scrub starts
Oct 10 09:46:05 compute-0 ceph-mon[73551]: 4.13 scrub ok
Oct 10 09:46:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 09:46:05 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Oct 10 09:46:05 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Oct 10 09:46:05 compute-0 sudo[88564]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 09:46:05 compute-0 nostalgic_carver[88516]: module 'dashboard' is already disabled
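Note: the disable call was a no-op because the dashboard module was already off, and the next Ansible task re-enables it. A sketch of the same cycle with verification, runnable from any admin node:

    ceph mgr module disable dashboard   # idempotent: reports "already disabled" if off
    ceph mgr module enable dashboard
    ceph mgr module ls                  # lists enabled vs. available mgr modules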
Oct 10 09:46:05 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:06 compute-0 systemd[1]: libpod-e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301.scope: Deactivated successfully.
Oct 10 09:46:06 compute-0 podman[88501]: 2025-10-10 09:46:06.014305215 +0000 UTC m=+1.426755951 container died e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301 (image=quay.io/ceph/ceph:v19, name=nostalgic_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-efefbf2008b2717719fcd34a037acac05455901dbbfb6a243fd27928256e05a6-merged.mount: Deactivated successfully.
Oct 10 09:46:06 compute-0 podman[88501]: 2025-10-10 09:46:06.145458237 +0000 UTC m=+1.557908973 container remove e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301 (image=quay.io/ceph/ceph:v19, name=nostalgic_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:06 compute-0 systemd[1]: libpod-conmon-e37b0f90e40acdf4634869005ee22f6de501477eafb40184f09fb143fab3c301.scope: Deactivated successfully.
Oct 10 09:46:06 compute-0 sudo[88498]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:06 compute-0 sudo[88656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmrcttvkvbnhqivgzgsnivitdgwkfoz ; /usr/bin/python3'
Oct 10 09:46:06 compute-0 sudo[88656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: 3.2 scrub starts
Oct 10 09:46:06 compute-0 ceph-mon[73551]: 3.2 scrub ok
Oct 10 09:46:06 compute-0 ceph-mon[73551]: 4.9 scrub starts
Oct 10 09:46:06 compute-0 ceph-mon[73551]: 4.9 scrub ok
Oct 10 09:46:06 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: 3.c scrub starts
Oct 10 09:46:06 compute-0 ceph-mon[73551]: 3.c scrub ok
Oct 10 09:46:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mgrmap e11: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:06 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:46:06 compute-0 python3[88658]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:46:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:06 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 10 09:46:06 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 10 09:46:06 compute-0 podman[88659]: 2025-10-10 09:46:06.539504461 +0000 UTC m=+0.043801084 container create de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747 (image=quay.io/ceph/ceph:v19, name=pensive_cannon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 09:46:06 compute-0 systemd[1]: Started libpod-conmon-de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747.scope.
Oct 10 09:46:06 compute-0 sudo[88665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:06 compute-0 sudo[88665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:06 compute-0 sudo[88665]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:06 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a053adf6a85280ffccf8b0cc79d7413ab4c516997d65306583962a9ff0dfeb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a053adf6a85280ffccf8b0cc79d7413ab4c516997d65306583962a9ff0dfeb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a053adf6a85280ffccf8b0cc79d7413ab4c516997d65306583962a9ff0dfeb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:06 compute-0 podman[88659]: 2025-10-10 09:46:06.608160387 +0000 UTC m=+0.112457020 container init de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747 (image=quay.io/ceph/ceph:v19, name=pensive_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:06 compute-0 podman[88659]: 2025-10-10 09:46:06.614567947 +0000 UTC m=+0.118864560 container start de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747 (image=quay.io/ceph/ceph:v19, name=pensive_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:46:06 compute-0 podman[88659]: 2025-10-10 09:46:06.522600751 +0000 UTC m=+0.026897384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:06 compute-0 podman[88659]: 2025-10-10 09:46:06.617907852 +0000 UTC m=+0.122204465 container attach de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747 (image=quay.io/ceph/ceph:v19, name=pensive_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:06 compute-0 sudo[88702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:46:06 compute-0 sudo[88702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v102: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct 10 09:46:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 09:46:07 compute-0 podman[88785]: 2025-10-10 09:46:07.113238013 +0000 UTC m=+0.059840776 container create a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bohr, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:07 compute-0 systemd[1]: Started libpod-conmon-a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be.scope.
Oct 10 09:46:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:07 compute-0 podman[88785]: 2025-10-10 09:46:07.082374853 +0000 UTC m=+0.028977656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:07 compute-0 podman[88785]: 2025-10-10 09:46:07.189387376 +0000 UTC m=+0.135990139 container init a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:46:07 compute-0 podman[88785]: 2025-10-10 09:46:07.199673399 +0000 UTC m=+0.146276162 container start a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bohr, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:07 compute-0 podman[88785]: 2025-10-10 09:46:07.204821026 +0000 UTC m=+0.151423749 container attach a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:07 compute-0 epic_bohr[88801]: 167 167
Oct 10 09:46:07 compute-0 systemd[1]: libpod-a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 podman[88806]: 2025-10-10 09:46:07.25416031 +0000 UTC m=+0.031392079 container died a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f88ab9646f9302b29d0e6f016393b2f7db98aa6658f3bc8f70c7e7e300ae4a-merged.mount: Deactivated successfully.
Oct 10 09:46:07 compute-0 podman[88806]: 2025-10-10 09:46:07.295474967 +0000 UTC m=+0.072706766 container remove a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bohr, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 09:46:07 compute-0 systemd[1]: libpod-conmon-a9a609ce23e72dc1c7d044e27c511ae3c3df717675e4b80221e20b8e1bc661be.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 ceph-mon[73551]: 3.1 deep-scrub starts
Oct 10 09:46:07 compute-0 ceph-mon[73551]: 3.1 deep-scrub ok
Oct 10 09:46:07 compute-0 ceph-mon[73551]: 3.0 scrub starts
Oct 10 09:46:07 compute-0 ceph-mon[73551]: 3.0 scrub ok
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:07 compute-0 ceph-mon[73551]: pgmap v102: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:07 compute-0 ceph-mon[73551]: 3.f deep-scrub starts
Oct 10 09:46:07 compute-0 ceph-mon[73551]: 3.f deep-scrub ok
Oct 10 09:46:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 09:46:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 09:46:07 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Oct 10 09:46:07 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  1: '-n'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  2: 'mgr.compute-0.xkdepb'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  3: '-f'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  4: '--setuser'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  5: 'ceph'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  6: '--setgroup'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  7: 'ceph'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr respawn  exe_path /proc/self/exe
Oct 10 09:46:07 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Oct 10 09:46:07 compute-0 podman[88828]: 2025-10-10 09:46:07.543134637 +0000 UTC m=+0.073694440 container create 8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_payne, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:07 compute-0 systemd[1]: libpod-de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 podman[88659]: 2025-10-10 09:46:07.563752725 +0000 UTC m=+1.068049348 container died de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747 (image=quay.io/ceph/ceph:v19, name=pensive_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:46:07 compute-0 systemd[1]: Started libpod-conmon-8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7.scope.
Oct 10 09:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-12a053adf6a85280ffccf8b0cc79d7413ab4c516997d65306583962a9ff0dfeb-merged.mount: Deactivated successfully.
Oct 10 09:46:07 compute-0 podman[88828]: 2025-10-10 09:46:07.509481122 +0000 UTC m=+0.040040925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:07 compute-0 sshd-session[75164]: Connection closed by 192.168.122.100 port 39206
Oct 10 09:46:07 compute-0 sshd-session[75137]: Connection closed by 192.168.122.100 port 39192
Oct 10 09:46:07 compute-0 sshd-session[75193]: Connection closed by 192.168.122.100 port 39218
Oct 10 09:46:07 compute-0 sshd-session[75108]: Connection closed by 192.168.122.100 port 39184
Oct 10 09:46:07 compute-0 sshd-session[75079]: Connection closed by 192.168.122.100 port 39178
Oct 10 09:46:07 compute-0 sshd-session[74904]: Connection closed by 192.168.122.100 port 39110
Oct 10 09:46:07 compute-0 sshd-session[75050]: Connection closed by 192.168.122.100 port 39174
Oct 10 09:46:07 compute-0 sshd-session[75021]: Connection closed by 192.168.122.100 port 39158
Oct 10 09:46:07 compute-0 sshd-session[74934]: Connection closed by 192.168.122.100 port 39134
Oct 10 09:46:07 compute-0 sshd-session[74992]: Connection closed by 192.168.122.100 port 39144
Oct 10 09:46:07 compute-0 sshd-session[74963]: Connection closed by 192.168.122.100 port 39138
Oct 10 09:46:07 compute-0 sshd-session[74905]: Connection closed by 192.168.122.100 port 39126
Oct 10 09:46:07 compute-0 podman[88659]: 2025-10-10 09:46:07.613465721 +0000 UTC m=+1.117762324 container remove de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747 (image=quay.io/ceph/ceph:v19, name=pensive_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 09:46:07 compute-0 sshd-session[75190]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[75018]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[75161]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[74960]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[75105]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[74899]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[74931]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[74989]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:07 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 sshd-session[74882]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 sshd-session[75047]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 28 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 sshd-session[75134]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7898e260182aa5b51e209e75dc98f6aa5b6a7ccc0ef2919750dc974f751ebb7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:07 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7898e260182aa5b51e209e75dc98f6aa5b6a7ccc0ef2919750dc974f751ebb7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:07 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7898e260182aa5b51e209e75dc98f6aa5b6a7ccc0ef2919750dc974f751ebb7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7898e260182aa5b51e209e75dc98f6aa5b6a7ccc0ef2919750dc974f751ebb7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7898e260182aa5b51e209e75dc98f6aa5b6a7ccc0ef2919750dc974f751ebb7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 34 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 24 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 27 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 31 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 32 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 sudo[88656]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 29 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 33 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd[1]: libpod-conmon-de755d1ff3c8a1c47d812ee18aa68c730e5e6d04703950f9404d59eef672d747.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 22 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 25 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 26 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setuser ceph since I am not root
Oct 10 09:46:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setgroup ceph since I am not root
Oct 10 09:46:07 compute-0 sshd-session[75076]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 28.
Oct 10 09:46:07 compute-0 podman[88828]: 2025-10-10 09:46:07.661365795 +0000 UTC m=+0.191925558 container init 8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:07 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 24.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Session 30 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: pidfile_write: ignore empty --pid-file
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 26.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 25.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 33.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 29.
Oct 10 09:46:07 compute-0 podman[88828]: 2025-10-10 09:46:07.673382268 +0000 UTC m=+0.203942031 container start 8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_payne, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 32.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 31.
Oct 10 09:46:07 compute-0 podman[88828]: 2025-10-10 09:46:07.677273152 +0000 UTC m=+0.207832955 container attach 8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 22.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 27.
Oct 10 09:46:07 compute-0 systemd-logind[806]: Removed session 30.
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'alerts'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:07.796+0000 7ff0afdef140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'balancer'
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'cephadm'
Oct 10 09:46:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:07.874+0000 7ff0afdef140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-0 sudo[88912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbhbwjvcdqzejmlkrrskiadwglwgvrxm ; /usr/bin/python3'
Oct 10 09:46:07 compute-0 sudo[88912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:08 compute-0 distracted_payne[88853]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:46:08 compute-0 distracted_payne[88853]: --> All data devices are unavailable
Oct 10 09:46:08 compute-0 podman[88828]: 2025-10-10 09:46:08.033545819 +0000 UTC m=+0.564105592 container died 8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_payne, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:08 compute-0 systemd[1]: libpod-8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7.scope: Deactivated successfully.
Oct 10 09:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7898e260182aa5b51e209e75dc98f6aa5b6a7ccc0ef2919750dc974f751ebb7f-merged.mount: Deactivated successfully.
Oct 10 09:46:08 compute-0 podman[88828]: 2025-10-10 09:46:08.087835173 +0000 UTC m=+0.618394956 container remove 8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:08 compute-0 systemd[1]: libpod-conmon-8ba6999cf8cf776a235f6c76120a25c9dd037fccf0cee6c19378840d1b437fa7.scope: Deactivated successfully.
Oct 10 09:46:08 compute-0 python3[88915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:08 compute-0 sudo[88702]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:08 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct 10 09:46:08 compute-0 systemd[1]: session-34.scope: Consumed 32.268s CPU time.
Oct 10 09:46:08 compute-0 systemd-logind[806]: Removed session 34.
Oct 10 09:46:08 compute-0 podman[88930]: 2025-10-10 09:46:08.18478401 +0000 UTC m=+0.046906581 container create 7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e (image=quay.io/ceph/ceph:v19, name=magical_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:08 compute-0 systemd[1]: Started libpod-conmon-7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e.scope.
Oct 10 09:46:08 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:08 compute-0 podman[88930]: 2025-10-10 09:46:08.167558439 +0000 UTC m=+0.029681060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81d6f244207c626b71fef40aba17667565abd295ae2236849f17c7e006495da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81d6f244207c626b71fef40aba17667565abd295ae2236849f17c7e006495da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81d6f244207c626b71fef40aba17667565abd295ae2236849f17c7e006495da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:08 compute-0 podman[88930]: 2025-10-10 09:46:08.280788706 +0000 UTC m=+0.142911297 container init 7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e (image=quay.io/ceph/ceph:v19, name=magical_kalam, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:46:08 compute-0 podman[88930]: 2025-10-10 09:46:08.289313778 +0000 UTC m=+0.151436339 container start 7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e (image=quay.io/ceph/ceph:v19, name=magical_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:08 compute-0 podman[88930]: 2025-10-10 09:46:08.293571114 +0000 UTC m=+0.155693735 container attach 7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e (image=quay.io/ceph/ceph:v19, name=magical_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:08 compute-0 ceph-mon[73551]: 6.4 scrub starts
Oct 10 09:46:08 compute-0 ceph-mon[73551]: 6.4 scrub ok
Oct 10 09:46:08 compute-0 ceph-mon[73551]: 5.0 scrub starts
Oct 10 09:46:08 compute-0 ceph-mon[73551]: 5.0 scrub ok
Oct 10 09:46:08 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 09:46:08 compute-0 ceph-mon[73551]: mgrmap e12: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:08 compute-0 ceph-mon[73551]: 3.d scrub starts
Oct 10 09:46:08 compute-0 ceph-mon[73551]: 3.d scrub ok
Oct 10 09:46:08 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 10 09:46:08 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 10 09:46:08 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'crash'
Oct 10 09:46:08 compute-0 ceph-mgr[73845]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:08 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'dashboard'
Oct 10 09:46:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:08.657+0000 7ff0afdef140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:09.317+0000 7ff0afdef140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 6.0 scrub starts
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 6.0 scrub ok
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 4.2 scrub starts
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 4.2 scrub ok
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 5.3 scrub starts
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 5.3 scrub ok
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 5.9 scrub starts
Oct 10 09:46:09 compute-0 ceph-mon[73551]: 5.9 scrub ok
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   from numpy import show_config as show_numpy_config
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:09.490+0000 7ff0afdef140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'influx'
Oct 10 09:46:09 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:09.558+0000 7ff0afdef140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'insights'
Oct 10 09:46:09 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'iostat'
Oct 10 09:46:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:09.695+0000 7ff0afdef140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'localpool'
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mirroring'
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'nfs'
Oct 10 09:46:10 compute-0 ceph-mon[73551]: 4.19 scrub starts
Oct 10 09:46:10 compute-0 ceph-mon[73551]: 4.19 scrub ok
Oct 10 09:46:10 compute-0 ceph-mon[73551]: 3.6 scrub starts
Oct 10 09:46:10 compute-0 ceph-mon[73551]: 3.6 scrub ok
Oct 10 09:46:10 compute-0 ceph-mon[73551]: 3.10 scrub starts
Oct 10 09:46:10 compute-0 ceph-mon[73551]: 3.10 scrub ok
Oct 10 09:46:10 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 10 09:46:10 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 10 09:46:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:10.693+0000 7ff0afdef140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:46:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:10.927+0000 7ff0afdef140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:46:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:11.004+0000 7ff0afdef140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_support'
Oct 10 09:46:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:11.070+0000 7ff0afdef140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:46:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:11.150+0000 7ff0afdef140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'progress'
Oct 10 09:46:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:11.228+0000 7ff0afdef140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'prometheus'
Oct 10 09:46:11 compute-0 ceph-mon[73551]: 5.1a scrub starts
Oct 10 09:46:11 compute-0 ceph-mon[73551]: 5.1a scrub ok
Oct 10 09:46:11 compute-0 ceph-mon[73551]: 3.7 scrub starts
Oct 10 09:46:11 compute-0 ceph-mon[73551]: 3.7 scrub ok
Oct 10 09:46:11 compute-0 ceph-mon[73551]: 3.13 scrub starts
Oct 10 09:46:11 compute-0 ceph-mon[73551]: 3.13 scrub ok
Oct 10 09:46:11 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Oct 10 09:46:11 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Oct 10 09:46:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:11.570+0000 7ff0afdef140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:46:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:11.667+0000 7ff0afdef140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'restful'
Oct 10 09:46:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rgw'
Oct 10 09:46:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:12.094+0000 7ff0afdef140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rook'
Oct 10 09:46:12 compute-0 ceph-mon[73551]: 4.1 scrub starts
Oct 10 09:46:12 compute-0 ceph-mon[73551]: 4.1 scrub ok
Oct 10 09:46:12 compute-0 ceph-mon[73551]: 4.0 scrub starts
Oct 10 09:46:12 compute-0 ceph-mon[73551]: 4.0 scrub ok
Oct 10 09:46:12 compute-0 ceph-mon[73551]: 6.a scrub starts
Oct 10 09:46:12 compute-0 ceph-mon[73551]: 6.a scrub ok
Oct 10 09:46:12 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 10 09:46:12 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 10 09:46:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:12.692+0000 7ff0afdef140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'selftest'
Oct 10 09:46:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:12.775+0000 7ff0afdef140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:46:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:12.864+0000 7ff0afdef140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'stats'
Oct 10 09:46:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'status'
Oct 10 09:46:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:13.021+0000 7ff0afdef140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telegraf'
Oct 10 09:46:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:13.098+0000 7ff0afdef140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telemetry'
Oct 10 09:46:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:13.261+0000 7ff0afdef140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:13.486+0000 7ff0afdef140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'volumes'
Oct 10 09:46:13 compute-0 ceph-mon[73551]: 6.1b scrub starts
Oct 10 09:46:13 compute-0 ceph-mon[73551]: 6.1b scrub ok
Oct 10 09:46:13 compute-0 ceph-mon[73551]: 4.7 scrub starts
Oct 10 09:46:13 compute-0 ceph-mon[73551]: 4.7 scrub ok
Oct 10 09:46:13 compute-0 ceph-mon[73551]: 6.8 scrub starts
Oct 10 09:46:13 compute-0 ceph-mon[73551]: 6.8 scrub ok
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:13 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 10 09:46:13 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:13.766+0000 7ff0afdef140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'zabbix'
Oct 10 09:46:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:13.835+0000 7ff0afdef140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xkdepb
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: ms_deliver_dispatch: unhandled message 0x5603aafa5860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr handle_mgr_map Activating!
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr handle_mgr_map I am now activating
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.xkdepb(active, starting, since 0.0350206s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e1 all = 1
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: balancer
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [balancer INFO root] Starting
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Manager daemon compute-0.xkdepb is now available
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:46:13
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: cephadm
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: crash
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: dashboard
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: devicehealth
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [dashboard INFO sso] Loading SSO DB version=1
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: iostat
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Starting
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: nfs
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: orchestrator
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: pg_autoscaler
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: progress
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [progress INFO root] Loading...
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7ff031931700>, <progress.module.GhostEvent object at 0x7ff031931970>, <progress.module.GhostEvent object at 0x7ff0319319a0>, <progress.module.GhostEvent object at 0x7ff0319319d0>, <progress.module.GhostEvent object at 0x7ff031931a00>, <progress.module.GhostEvent object at 0x7ff031931a30>, <progress.module.GhostEvent object at 0x7ff031931a60>, <progress.module.GhostEvent object at 0x7ff031931a90>, <progress.module.GhostEvent object at 0x7ff031931ac0>, <progress.module.GhostEvent object at 0x7ff031931af0>, <progress.module.GhostEvent object at 0x7ff031931b20>] historic events
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] recovery thread starting
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] starting setup
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: rbd_support
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: restful
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: status
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [restful WARNING root] server not running: no certificate configured
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: telemetry
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"} v 0)
Oct 10 09:46:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:46:13 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] PerfHandler: starting
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: volumes
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TaskHandler: starting
Oct 10 09:46:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"} v 0)
Oct 10 09:46:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [rbd_support INFO root] setup complete
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct 10 09:46:14 compute-0 sshd-session[89096]: Accepted publickey for ceph-admin from 192.168.122.100 port 54898 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:46:14 compute-0 systemd-logind[806]: New session 35 of user ceph-admin.
Oct 10 09:46:14 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Oct 10 09:46:14 compute-0 sshd-session[89096]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.module] Engine started.
Oct 10 09:46:14 compute-0 ceph-mon[73551]: 4.6 scrub starts
Oct 10 09:46:14 compute-0 ceph-mon[73551]: 4.6 scrub ok
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:14 compute-0 ceph-mon[73551]: 5.6 scrub starts
Oct 10 09:46:14 compute-0 ceph-mon[73551]: 5.6 scrub ok
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Activating manager daemon compute-0.xkdepb
Oct 10 09:46:14 compute-0 ceph-mon[73551]: osdmap e36: 3 total, 3 up, 3 in
Oct 10 09:46:14 compute-0 ceph-mon[73551]: mgrmap e13: compute-0.xkdepb(active, starting, since 0.0350206s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:46:14 compute-0 sudo[89112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:14 compute-0 sudo[89112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:14 compute-0 sudo[89112]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:14 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 10 09:46:14 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 10 09:46:14 compute-0 sudo[89137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:46:14 compute-0 sudo[89137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:14 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.xkdepb(active, since 1.07712s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Oct 10 09:46:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v3: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:14 compute-0 magical_kalam[88950]: Option GRAFANA_API_USERNAME updated
Oct 10 09:46:14 compute-0 systemd[1]: libpod-7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e.scope: Deactivated successfully.
Oct 10 09:46:15 compute-0 podman[89201]: 2025-10-10 09:46:15.010529985 +0000 UTC m=+0.031919457 container died 7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e (image=quay.io/ceph/ceph:v19, name=magical_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81d6f244207c626b71fef40aba17667565abd295ae2236849f17c7e006495da-merged.mount: Deactivated successfully.
Oct 10 09:46:15 compute-0 podman[89201]: 2025-10-10 09:46:15.060087356 +0000 UTC m=+0.081476868 container remove 7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e (image=quay.io/ceph/ceph:v19, name=magical_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:15 compute-0 systemd[1]: libpod-conmon-7708fcb853af16c99ab961cfc59840d4155834599e015c17d37ae73818c65d0e.scope: Deactivated successfully.
Oct 10 09:46:15 compute-0 sudo[88912]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:15 compute-0 podman[89248]: 2025-10-10 09:46:15.286824078 +0000 UTC m=+0.149437859 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:15 compute-0 sudo[89291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffyodztueorsdxeyopywcdtxyagobmkm ; /usr/bin/python3'
Oct 10 09:46:15 compute-0 sudo[89291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:15 compute-0 podman[89248]: 2025-10-10 09:46:15.385956641 +0000 UTC m=+0.248570422 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:15] ENGINE Bus STARTING
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:15] ENGINE Bus STARTING
Oct 10 09:46:15 compute-0 python3[89293]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:15 compute-0 ceph-mon[73551]: 3.1b scrub starts
Oct 10 09:46:15 compute-0 ceph-mon[73551]: 3.1b scrub ok
Oct 10 09:46:15 compute-0 ceph-mon[73551]: 4.a scrub starts
Oct 10 09:46:15 compute-0 ceph-mon[73551]: 4.a scrub ok
Oct 10 09:46:15 compute-0 ceph-mon[73551]: 5.c scrub starts
Oct 10 09:46:15 compute-0 ceph-mon[73551]: 5.c scrub ok
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mgrmap e14: compute-0.xkdepb(active, since 1.07712s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:15 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 podman[89323]: 2025-10-10 09:46:15.551552895 +0000 UTC m=+0.048854009 container create b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6 (image=quay.io/ceph/ceph:v19, name=xenodochial_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:46:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:15] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:15] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:15] ENGINE Client ('192.168.122.100', 44336) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:15] ENGINE Client ('192.168.122.100', 44336) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:15 compute-0 systemd[1]: Started libpod-conmon-b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6.scope.
Oct 10 09:46:15 compute-0 podman[89323]: 2025-10-10 09:46:15.529721235 +0000 UTC m=+0.027022349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:15 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 10 09:46:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:15 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 10 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd23b303d50452ca9891a2776482cd551e003d096ba2297a0bbdd85639c5ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd23b303d50452ca9891a2776482cd551e003d096ba2297a0bbdd85639c5ad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd23b303d50452ca9891a2776482cd551e003d096ba2297a0bbdd85639c5ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:15 compute-0 podman[89323]: 2025-10-10 09:46:15.668715016 +0000 UTC m=+0.166016130 container init b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6 (image=quay.io/ceph/ceph:v19, name=xenodochial_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:46:15 compute-0 podman[89323]: 2025-10-10 09:46:15.677698984 +0000 UTC m=+0.175000078 container start b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6 (image=quay.io/ceph/ceph:v19, name=xenodochial_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:46:15 compute-0 podman[89323]: 2025-10-10 09:46:15.684669083 +0000 UTC m=+0.181970287 container attach b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6 (image=quay.io/ceph/ceph:v19, name=xenodochial_swartz, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:15] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:15] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:15] ENGINE Bus STARTED
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:15] ENGINE Bus STARTED
Oct 10 09:46:15 compute-0 sudo[89137]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 sudo[89400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:15 compute-0 sudo[89400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:15 compute-0 sudo[89400]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v4: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:15 compute-0 sudo[89444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:46:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-0 sudo[89444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 xenodochial_swartz[89379]: Option GRAFANA_API_PASSWORD updated
Oct 10 09:46:16 compute-0 systemd[1]: libpod-b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6.scope: Deactivated successfully.
Oct 10 09:46:16 compute-0 podman[89323]: 2025-10-10 09:46:16.096098844 +0000 UTC m=+0.593399948 container died b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6 (image=quay.io/ceph/ceph:v19, name=xenodochial_swartz, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-94bd23b303d50452ca9891a2776482cd551e003d096ba2297a0bbdd85639c5ad-merged.mount: Deactivated successfully.
Oct 10 09:46:16 compute-0 podman[89323]: 2025-10-10 09:46:16.130036059 +0000 UTC m=+0.627337153 container remove b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6 (image=quay.io/ceph/ceph:v19, name=xenodochial_swartz, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:16 compute-0 systemd[1]: libpod-conmon-b63604c72aaf250bca4b06cd0913106f55e42f53921f289907cd516f5eee11e6.scope: Deactivated successfully.
Oct 10 09:46:16 compute-0 sudo[89291]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-0 sudo[89547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loplcvngeyijnwgzrhbpjlvffqrddcrs ; /usr/bin/python3'
Oct 10 09:46:16 compute-0 sudo[89444]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-0 sudo[89547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:16 compute-0 sudo[89550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:16 compute-0 sudo[89550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:16 compute-0 sudo[89550]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-0 python3[89549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:16 compute-0 sudo[89575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:46:16 compute-0 sudo[89575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 5.e deep-scrub starts
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 5.e deep-scrub ok
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 4.d scrub starts
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 4.d scrub ok
Oct 10 09:46:16 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:15] ENGINE Bus STARTING
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:15] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:16 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:15] ENGINE Client ('192.168.122.100', 44336) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 6.f scrub starts
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 6.f scrub ok
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: 3.a scrub starts
Oct 10 09:46:16 compute-0 podman[89598]: 2025-10-10 09:46:16.572617389 +0000 UTC m=+0.048326339 container create 0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900 (image=quay.io/ceph/ceph:v19, name=stupefied_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:46:16 compute-0 systemd[1]: Started libpod-conmon-0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900.scope.
Oct 10 09:46:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:16 compute-0 podman[89598]: 2025-10-10 09:46:16.552950574 +0000 UTC m=+0.028659554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff36fe073e0bd3360a3ab56ca4c77595e6431aa019a7746796cc5bedeaa2e45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff36fe073e0bd3360a3ab56ca4c77595e6431aa019a7746796cc5bedeaa2e45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff36fe073e0bd3360a3ab56ca4c77595e6431aa019a7746796cc5bedeaa2e45/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:16 compute-0 podman[89598]: 2025-10-10 09:46:16.660867759 +0000 UTC m=+0.136576729 container init 0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900 (image=quay.io/ceph/ceph:v19, name=stupefied_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:46:16 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Oct 10 09:46:16 compute-0 podman[89598]: 2025-10-10 09:46:16.669494565 +0000 UTC m=+0.145203515 container start 0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900 (image=quay.io/ceph/ceph:v19, name=stupefied_blackwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:16 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Oct 10 09:46:16 compute-0 podman[89598]: 2025-10-10 09:46:16.673347557 +0000 UTC m=+0.149056557 container attach 0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900 (image=quay.io/ceph/ceph:v19, name=stupefied_blackwell, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:16 compute-0 sudo[89575]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 10 09:46:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:16 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 stupefied_blackwell[89615]: Option ALERTMANAGER_API_HOST updated
Oct 10 09:46:17 compute-0 systemd[1]: libpod-0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900.scope: Deactivated successfully.
Oct 10 09:46:17 compute-0 podman[89598]: 2025-10-10 09:46:17.089092217 +0000 UTC m=+0.564801167 container died 0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900 (image=quay.io/ceph/ceph:v19, name=stupefied_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff36fe073e0bd3360a3ab56ca4c77595e6431aa019a7746796cc5bedeaa2e45-merged.mount: Deactivated successfully.
Oct 10 09:46:17 compute-0 podman[89598]: 2025-10-10 09:46:17.121741807 +0000 UTC m=+0.597450757 container remove 0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900 (image=quay.io/ceph/ceph:v19, name=stupefied_blackwell, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:17 compute-0 systemd[1]: libpod-conmon-0edc7b1a4c47d4414a9147d80bb0a9dfeaf236ffe23edf5a23a0ae4de6be6900.scope: Deactivated successfully.
Oct 10 09:46:17 compute-0 sudo[89547]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 sudo[89700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btevsbhoqlbfnomuhdvultlhmyzfitxg ; /usr/bin/python3'
Oct 10 09:46:17 compute-0 sudo[89700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:17 compute-0 sudo[89690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:17 compute-0 sudo[89690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89690]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 sudo[89721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:17 compute-0 sudo[89721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89721]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 python3[89718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:17 compute-0 sudo[89746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-0 sudo[89746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89746]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 podman[89769]: 2025-10-10 09:46:17.499671518 +0000 UTC m=+0.044415965 container create 338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9 (image=quay.io/ceph/ceph:v19, name=intelligent_solomon, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:17 compute-0 systemd[1]: Started libpod-conmon-338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9.scope.
Oct 10 09:46:17 compute-0 sudo[89777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:17 compute-0 sudo[89777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89777]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:15] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:17 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:15] ENGINE Bus STARTED
Oct 10 09:46:17 compute-0 ceph-mon[73551]: 4.1c scrub starts
Oct 10 09:46:17 compute-0 ceph-mon[73551]: 4.1c scrub ok
Oct 10 09:46:17 compute-0 ceph-mon[73551]: pgmap v4: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: 3.a scrub ok
Oct 10 09:46:17 compute-0 ceph-mon[73551]: 3.b deep-scrub starts
Oct 10 09:46:17 compute-0 ceph-mon[73551]: 3.b deep-scrub ok
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mgrmap e15: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:17 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbaa528902ed55f1ab649a234e5201c23fd3edb2d15c390fcde2d5359ebc1c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbaa528902ed55f1ab649a234e5201c23fd3edb2d15c390fcde2d5359ebc1c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbaa528902ed55f1ab649a234e5201c23fd3edb2d15c390fcde2d5359ebc1c4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:17 compute-0 podman[89769]: 2025-10-10 09:46:17.478815042 +0000 UTC m=+0.023559509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:17 compute-0 podman[89769]: 2025-10-10 09:46:17.581658622 +0000 UTC m=+0.126403089 container init 338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9 (image=quay.io/ceph/ceph:v19, name=intelligent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:46:17 compute-0 podman[89769]: 2025-10-10 09:46:17.59064092 +0000 UTC m=+0.135385407 container start 338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9 (image=quay.io/ceph/ceph:v19, name=intelligent_solomon, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 09:46:17 compute-0 podman[89769]: 2025-10-10 09:46:17.595615451 +0000 UTC m=+0.140359918 container attach 338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9 (image=quay.io/ceph/ceph:v19, name=intelligent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:17 compute-0 sudo[89814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-0 sudo[89814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89814]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 10 09:46:17 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 10 09:46:17 compute-0 sudo[89865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-0 sudo[89865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89865]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 sudo[89907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-0 sudo[89907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89907]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 sudo[89932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:46:17 compute-0 sudo[89932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89932]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v5: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:17 compute-0 sudo[89957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:17 compute-0 sudo[89957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89957]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14403 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Oct 10 09:46:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-0 intelligent_solomon[89810]: Option PROMETHEUS_API_HOST updated
Oct 10 09:46:17 compute-0 systemd[1]: libpod-338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9.scope: Deactivated successfully.
Oct 10 09:46:17 compute-0 podman[89769]: 2025-10-10 09:46:17.975963316 +0000 UTC m=+0.520707773 container died 338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9 (image=quay.io/ceph/ceph:v19, name=intelligent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:46:17 compute-0 sudo[89982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:17 compute-0 sudo[89982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-0 sudo[89982]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcbaa528902ed55f1ab649a234e5201c23fd3edb2d15c390fcde2d5359ebc1c4-merged.mount: Deactivated successfully.
Oct 10 09:46:18 compute-0 podman[89769]: 2025-10-10 09:46:18.016290519 +0000 UTC m=+0.561034966 container remove 338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9 (image=quay.io/ceph/ceph:v19, name=intelligent_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:18 compute-0 systemd[1]: libpod-conmon-338c7258b7f31acc93052b531c9e99d110882d70e84cdf4c7aa4ccf5718597c9.scope: Deactivated successfully.
Oct 10 09:46:18 compute-0 sudo[90011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-0 sudo[89700]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 sudo[90011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90011]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:18 compute-0 sudo[90045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:18 compute-0 sudo[90045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90045]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 sudo[90070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-0 sudo[90070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90070]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 sudo[90117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvxmitvdjdtkjfglcbbbsvujqafgpow ; /usr/bin/python3'
Oct 10 09:46:18 compute-0 sudo[90117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:18 compute-0 sudo[90144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-0 sudo[90144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90144]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 python3[90124]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:18 compute-0 sudo[90169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-0 sudo[90169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90169]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.385577215 +0000 UTC m=+0.044852181 container create 123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a (image=quay.io/ceph/ceph:v19, name=dreamy_newton, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:18 compute-0 systemd[1]: Started libpod-conmon-123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a.scope.
Oct 10 09:46:18 compute-0 sudo[90203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 sudo[90203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90203]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7967b15b0427114594f98c2d29e03ecc248bdaa68473bfd0efe45828d73895/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7967b15b0427114594f98c2d29e03ecc248bdaa68473bfd0efe45828d73895/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7967b15b0427114594f98c2d29e03ecc248bdaa68473bfd0efe45828d73895/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.36417345 +0000 UTC m=+0.023448236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.461275153 +0000 UTC m=+0.120549939 container init 123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a (image=quay.io/ceph/ceph:v19, name=dreamy_newton, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.467495456 +0000 UTC m=+0.126770222 container start 123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a (image=quay.io/ceph/ceph:v19, name=dreamy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.476422322 +0000 UTC m=+0.135697088 container attach 123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a (image=quay.io/ceph/ceph:v19, name=dreamy_newton, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:46:18 compute-0 sudo[90236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:18 compute-0 sudo[90236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90236]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 sudo[90262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:18 compute-0 sudo[90262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90262]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:18 compute-0 ceph-mon[73551]: 6.1e scrub starts
Oct 10 09:46:18 compute-0 ceph-mon[73551]: 6.1e scrub ok
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:18 compute-0 ceph-mon[73551]: from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:18 compute-0 ceph-mon[73551]: 6.15 scrub starts
Oct 10 09:46:18 compute-0 ceph-mon[73551]: 6.15 scrub ok
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:18 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:18 compute-0 ceph-mon[73551]: 4.b scrub starts
Oct 10 09:46:18 compute-0 ceph-mon[73551]: 4.b scrub ok
Oct 10 09:46:18 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:18 compute-0 sudo[90289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:18 compute-0 sudo[90289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90289]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 10 09:46:18 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 10 09:46:18 compute-0 sudo[90331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:18 compute-0 sudo[90331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90331]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 sudo[90356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:18 compute-0 sudo[90356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90356]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.24217 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct 10 09:46:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:18 compute-0 dreamy_newton[90232]: Option GRAFANA_API_URL updated
Oct 10 09:46:18 compute-0 sudo[90404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:18 compute-0 sudo[90404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90404]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 systemd[1]: libpod-123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a.scope: Deactivated successfully.
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.877510799 +0000 UTC m=+0.536785555 container died 123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a (image=quay.io/ceph/ceph:v19, name=dreamy_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 09:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d7967b15b0427114594f98c2d29e03ecc248bdaa68473bfd0efe45828d73895-merged.mount: Deactivated successfully.
Oct 10 09:46:18 compute-0 podman[90192]: 2025-10-10 09:46:18.91424827 +0000 UTC m=+0.573523036 container remove 123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a (image=quay.io/ceph/ceph:v19, name=dreamy_newton, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 09:46:18 compute-0 systemd[1]: libpod-conmon-123f6002c226e89b55e8d2536c0f5740d7a15d10995cc8604eef62d12026247a.scope: Deactivated successfully.
Oct 10 09:46:18 compute-0 sudo[90117]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 sudo[90432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:18 compute-0 sudo[90432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-0 sudo[90432]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 sudo[90467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:18 compute-0 sudo[90467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90467]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 sudo[90536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zccfhvzflgnszqfecoipjgmsuwgnaehl ; /usr/bin/python3'
Oct 10 09:46:19 compute-0 sudo[90536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:19 compute-0 sudo[90496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:19 compute-0 sudo[90496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90496]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 sudo[90543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:19 compute-0 sudo[90543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90543]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 sudo[90568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-0 sudo[90568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90568]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 python3[90541]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:19 compute-0 sudo[90594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:19 compute-0 sudo[90594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90594]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 podman[90593]: 2025-10-10 09:46:19.256917451 +0000 UTC m=+0.043653239 container create f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2 (image=quay.io/ceph/ceph:v19, name=happy_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:19 compute-0 systemd[1]: Started libpod-conmon-f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2.scope.
Oct 10 09:46:19 compute-0 sudo[90631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-0 sudo[90631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90631]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a494482a286e255dd8b43a4c0c1237f21d35dcd5cbf21b5682751f4ce60ef45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a494482a286e255dd8b43a4c0c1237f21d35dcd5cbf21b5682751f4ce60ef45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a494482a286e255dd8b43a4c0c1237f21d35dcd5cbf21b5682751f4ce60ef45/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:19 compute-0 podman[90593]: 2025-10-10 09:46:19.239277236 +0000 UTC m=+0.026013034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:19 compute-0 podman[90593]: 2025-10-10 09:46:19.336179011 +0000 UTC m=+0.122914799 container init f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2 (image=quay.io/ceph/ceph:v19, name=happy_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 10 09:46:19 compute-0 podman[90593]: 2025-10-10 09:46:19.341648899 +0000 UTC m=+0.128384667 container start f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2 (image=quay.io/ceph/ceph:v19, name=happy_hellman, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:19 compute-0 podman[90593]: 2025-10-10 09:46:19.344464786 +0000 UTC m=+0.131200584 container attach f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2 (image=quay.io/ceph/ceph:v19, name=happy_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:46:19 compute-0 sudo[90686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-0 sudo[90686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90686]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 sudo[90711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-0 sudo[90711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90711]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:19 compute-0 sudo[90755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 sudo[90755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-0 sudo[90755]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: 3.1a scrub starts
Oct 10 09:46:19 compute-0 ceph-mon[73551]: 3.1a scrub ok
Oct 10 09:46:19 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:19 compute-0 ceph-mon[73551]: pgmap v5: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:19 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:19 compute-0 ceph-mon[73551]: from='client.14403 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:19 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:19 compute-0 ceph-mon[73551]: 6.7 scrub starts
Oct 10 09:46:19 compute-0 ceph-mon[73551]: 6.7 scrub ok
Oct 10 09:46:19 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 ceph-mon[73551]: mgrmap e16: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:19 compute-0 ceph-mon[73551]: 5.a scrub starts
Oct 10 09:46:19 compute-0 ceph-mon[73551]: 5.a scrub ok
Oct 10 09:46:19 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:19 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Oct 10 09:46:19 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Oct 10 09:46:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct 10 09:46:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 09:46:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v6: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:46:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev ccca3fda-2c8a-493a-b6c7-98a4fd17b12f (Updating node-exporter deployment (+3 -> 3))
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 09:46:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  1: '-n'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  2: 'mgr.compute-0.xkdepb'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  3: '-f'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  4: '--setuser'
Oct 10 09:46:20 compute-0 sudo[90781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  5: 'ceph'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  6: '--setgroup'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  7: 'ceph'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 09:46:20 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.xkdepb(active, since 6s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:20 compute-0 sudo[90781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:20 compute-0 sudo[90781]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:20 compute-0 ceph-mon[73551]: from='client.24217 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:20 compute-0 ceph-mon[73551]: 4.15 scrub starts
Oct 10 09:46:20 compute-0 ceph-mon[73551]: 4.15 scrub ok
Oct 10 09:46:20 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:20 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:20 compute-0 ceph-mon[73551]: 5.7 scrub starts
Oct 10 09:46:20 compute-0 ceph-mon[73551]: 5.7 scrub ok
Oct 10 09:46:20 compute-0 ceph-mon[73551]: 6.9 deep-scrub starts
Oct 10 09:46:20 compute-0 ceph-mon[73551]: 6.9 deep-scrub ok
Oct 10 09:46:20 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 09:46:20 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-0 ceph-mon[73551]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-0 systemd[1]: libpod-f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2.scope: Deactivated successfully.
Oct 10 09:46:20 compute-0 podman[90593]: 2025-10-10 09:46:20.67320406 +0000 UTC m=+1.459939838 container died f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2 (image=quay.io/ceph/ceph:v19, name=happy_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:20 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 10 09:46:20 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 10 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a494482a286e255dd8b43a4c0c1237f21d35dcd5cbf21b5682751f4ce60ef45-merged.mount: Deactivated successfully.
Oct 10 09:46:20 compute-0 podman[90593]: 2025-10-10 09:46:20.717373497 +0000 UTC m=+1.504109265 container remove f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2 (image=quay.io/ceph/ceph:v19, name=happy_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:46:20 compute-0 systemd[1]: libpod-conmon-f61dd2e2cd285109e6f8604aa2f663a602f08c4e027a53e238050521381a2ae2.scope: Deactivated successfully.
Oct 10 09:46:20 compute-0 sshd-session[89110]: Read error from remote host 192.168.122.100 port 54898: Connection reset by peer
Oct 10 09:46:20 compute-0 sudo[90536]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-0 sshd-session[89096]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:20 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 10 09:46:20 compute-0 systemd[1]: session-35.scope: Consumed 4.633s CPU time.
Oct 10 09:46:20 compute-0 systemd-logind[806]: Session 35 logged out. Waiting for processes to exit.
Oct 10 09:46:20 compute-0 systemd-logind[806]: Removed session 35.
Oct 10 09:46:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setuser ceph since I am not root
Oct 10 09:46:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setgroup ceph since I am not root
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: pidfile_write: ignore empty --pid-file
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'alerts'
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'balancer'
Oct 10 09:46:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:20.890+0000 7fe5a189d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:20 compute-0 sudo[90862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mszqaupmtdjqdwyofeamtvuhvhrxsalb ; /usr/bin/python3'
Oct 10 09:46:20 compute-0 sudo[90862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:20 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'cephadm'
Oct 10 09:46:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:20.968+0000 7fe5a189d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:21 compute-0 python3[90864]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.125302048 +0000 UTC m=+0.043895807 container create 111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4 (image=quay.io/ceph/ceph:v19, name=exciting_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:46:21 compute-0 systemd[1]: Started libpod-conmon-111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4.scope.
Oct 10 09:46:21 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd685393026d9b273c4f50b58db41ee7daef1e5eacc568e036e14f8b4bdc07f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd685393026d9b273c4f50b58db41ee7daef1e5eacc568e036e14f8b4bdc07f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd685393026d9b273c4f50b58db41ee7daef1e5eacc568e036e14f8b4bdc07f8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.10407879 +0000 UTC m=+0.022672599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.207905923 +0000 UTC m=+0.126499802 container init 111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4 (image=quay.io/ceph/ceph:v19, name=exciting_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.215041178 +0000 UTC m=+0.133634947 container start 111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4 (image=quay.io/ceph/ceph:v19, name=exciting_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.225264129 +0000 UTC m=+0.143857938 container attach 111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4 (image=quay.io/ceph/ceph:v19, name=exciting_banzai, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct 10 09:46:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 09:46:21 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 10 09:46:21 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 10 09:46:21 compute-0 ceph-mon[73551]: 4.3 scrub starts
Oct 10 09:46:21 compute-0 ceph-mon[73551]: 4.3 scrub ok
Oct 10 09:46:21 compute-0 ceph-mon[73551]: 5.2 scrub starts
Oct 10 09:46:21 compute-0 ceph-mon[73551]: 5.2 scrub ok
Oct 10 09:46:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 09:46:21 compute-0 ceph-mon[73551]: mgrmap e17: compute-0.xkdepb(active, since 6s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:21 compute-0 ceph-mon[73551]: 6.b scrub starts
Oct 10 09:46:21 compute-0 ceph-mon[73551]: 6.b scrub ok
Oct 10 09:46:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 09:46:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 09:46:21 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'crash'
Oct 10 09:46:21 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.xkdepb(active, since 7s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:21 compute-0 systemd[1]: libpod-111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4.scope: Deactivated successfully.
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.720632091 +0000 UTC m=+0.639225850 container died 111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4 (image=quay.io/ceph/ceph:v19, name=exciting_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd685393026d9b273c4f50b58db41ee7daef1e5eacc568e036e14f8b4bdc07f8-merged.mount: Deactivated successfully.
Oct 10 09:46:21 compute-0 podman[90865]: 2025-10-10 09:46:21.753387876 +0000 UTC m=+0.671981645 container remove 111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4 (image=quay.io/ceph/ceph:v19, name=exciting_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:21 compute-0 systemd[1]: libpod-conmon-111af2c7fbc4fcc43b1eac3b0f31b4aa9495385df76d3ee2d15c52fcb21a03d4.scope: Deactivated successfully.
Oct 10 09:46:21 compute-0 sudo[90862]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:21 compute-0 ceph-mgr[73845]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:21 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'dashboard'
Oct 10 09:46:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:21.788+0000 7fe5a189d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:22.480+0000 7fe5a189d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 python3[91003]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   from numpy import show_config as show_numpy_config
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'influx'
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:22.636+0000 7fe5a189d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 10 09:46:22 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 10 09:46:22 compute-0 ceph-mon[73551]: 4.1d scrub starts
Oct 10 09:46:22 compute-0 ceph-mon[73551]: 4.1d scrub ok
Oct 10 09:46:22 compute-0 ceph-mon[73551]: 6.5 scrub starts
Oct 10 09:46:22 compute-0 ceph-mon[73551]: 6.5 scrub ok
Oct 10 09:46:22 compute-0 ceph-mon[73551]: 4.17 scrub starts
Oct 10 09:46:22 compute-0 ceph-mon[73551]: 4.17 scrub ok
Oct 10 09:46:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 09:46:22 compute-0 ceph-mon[73551]: mgrmap e18: compute-0.xkdepb(active, since 7s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'insights'
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:22.715+0000 7fe5a189d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'iostat'
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:46:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:22.850+0000 7fe5a189d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-0 python3[91074]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089582.2662597-33998-137047973103289/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:46:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'localpool'
Oct 10 09:46:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:46:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:23 compute-0 sudo[91122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keucnzsunicgbrhhpbhytfpmhciuenpd ; /usr/bin/python3'
Oct 10 09:46:23 compute-0 sudo[91122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mirroring'
Oct 10 09:46:23 compute-0 python3[91124]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'nfs'
Oct 10 09:46:23 compute-0 podman[91125]: 2025-10-10 09:46:23.668517436 +0000 UTC m=+0.077869163 container create 9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d (image=quay.io/ceph/ceph:v19, name=distracted_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:23 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 10 09:46:23 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 10 09:46:23 compute-0 ceph-mon[73551]: 3.1d scrub starts
Oct 10 09:46:23 compute-0 ceph-mon[73551]: 3.1d scrub ok
Oct 10 09:46:23 compute-0 ceph-mon[73551]: 3.3 deep-scrub starts
Oct 10 09:46:23 compute-0 ceph-mon[73551]: 3.3 deep-scrub ok
Oct 10 09:46:23 compute-0 ceph-mon[73551]: 4.16 scrub starts
Oct 10 09:46:23 compute-0 ceph-mon[73551]: 4.16 scrub ok
Oct 10 09:46:23 compute-0 systemd[1]: Started libpod-conmon-9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d.scope.
Oct 10 09:46:23 compute-0 podman[91125]: 2025-10-10 09:46:23.638806927 +0000 UTC m=+0.048158694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:23 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3014f4ca22d28bd11d8b657d329f22b1b90a7e527fe117cbe348e7230becff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3014f4ca22d28bd11d8b657d329f22b1b90a7e527fe117cbe348e7230becff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3014f4ca22d28bd11d8b657d329f22b1b90a7e527fe117cbe348e7230becff/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:23 compute-0 podman[91125]: 2025-10-10 09:46:23.772819416 +0000 UTC m=+0.182171173 container init 9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d (image=quay.io/ceph/ceph:v19, name=distracted_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:23 compute-0 podman[91125]: 2025-10-10 09:46:23.785091058 +0000 UTC m=+0.194442745 container start 9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d (image=quay.io/ceph/ceph:v19, name=distracted_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:46:23 compute-0 podman[91125]: 2025-10-10 09:46:23.789031033 +0000 UTC m=+0.198382920 container attach 9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d (image=quay.io/ceph/ceph:v19, name=distracted_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:23 compute-0 ceph-mgr[73845]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:23 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:46:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:23.833+0000 7fe5a189d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.070+0000 7fe5a189d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_support'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.147+0000 7fe5a189d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.208+0000 7fe5a189d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'progress'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.287+0000 7fe5a189d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'prometheus'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.351+0000 7fe5a189d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.682+0000 7fe5a189d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 3.9 deep-scrub starts
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 3.9 deep-scrub ok
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 5.1 scrub starts
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 5.1 scrub ok
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 5.17 scrub starts
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 5.17 scrub ok
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 5.12 scrub starts
Oct 10 09:46:24 compute-0 ceph-mon[73551]: 5.12 scrub ok
Oct 10 09:46:24 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 10 09:46:24 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'restful'
Oct 10 09:46:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:24.790+0000 7fe5a189d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rgw'
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rook'
Oct 10 09:46:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:25.308+0000 7fe5a189d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-0 ceph-mon[73551]: 3.5 scrub starts
Oct 10 09:46:25 compute-0 ceph-mon[73551]: 3.5 scrub ok
Oct 10 09:46:25 compute-0 ceph-mon[73551]: 6.14 scrub starts
Oct 10 09:46:25 compute-0 ceph-mon[73551]: 6.14 scrub ok
Oct 10 09:46:25 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 10 09:46:25 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'selftest'
Oct 10 09:46:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:25.885+0000 7fe5a189d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:46:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:25.951+0000 7fe5a189d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'stats'
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.027+0000 7fe5a189d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'status'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telegraf'
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.168+0000 7fe5a189d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telemetry'
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.238+0000 7fe5a189d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.396+0000 7fe5a189d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'volumes'
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.610+0000 7fe5a189d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mon[73551]: 5.4 scrub starts
Oct 10 09:46:26 compute-0 ceph-mon[73551]: 5.4 scrub ok
Oct 10 09:46:26 compute-0 ceph-mon[73551]: 4.e deep-scrub starts
Oct 10 09:46:26 compute-0 ceph-mon[73551]: 4.e deep-scrub ok
Oct 10 09:46:26 compute-0 ceph-mon[73551]: 3.12 scrub starts
Oct 10 09:46:26 compute-0 ceph-mon[73551]: 3.12 scrub ok
Oct 10 09:46:26 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:26 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.xkdepb(active, since 12s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:26 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 10 09:46:26 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'zabbix'
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.870+0000 7fe5a189d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:26.942+0000 7fe5a189d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xkdepb
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: ms_deliver_dispatch: unhandled message 0x559796051860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  1: '-n'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  2: 'mgr.compute-0.xkdepb'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  3: '-f'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  4: '--setuser'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  5: 'ceph'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  6: '--setgroup'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  7: 'ceph'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 10 09:46:26 compute-0 ceph-mgr[73845]: mgr respawn  exe_path /proc/self/exe
Oct 10 09:46:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.xkdepb(active, starting, since 0.0290775s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setuser ceph since I am not root
Oct 10 09:46:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setgroup ceph since I am not root
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: pidfile_write: ignore empty --pid-file
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'alerts'
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'balancer'
Oct 10 09:46:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:27.182+0000 7f9bd763a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:27 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'cephadm'
Oct 10 09:46:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:27.261+0000 7f9bd763a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:27 compute-0 ceph-mon[73551]: 6.12 deep-scrub starts
Oct 10 09:46:27 compute-0 ceph-mon[73551]: 6.12 deep-scrub ok
Oct 10 09:46:27 compute-0 ceph-mon[73551]: 5.f scrub starts
Oct 10 09:46:27 compute-0 ceph-mon[73551]: 5.f scrub ok
Oct 10 09:46:27 compute-0 ceph-mon[73551]: mgrmap e19: compute-0.xkdepb(active, since 12s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:27 compute-0 ceph-mon[73551]: 5.14 scrub starts
Oct 10 09:46:27 compute-0 ceph-mon[73551]: 5.14 scrub ok
Oct 10 09:46:27 compute-0 ceph-mon[73551]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:27 compute-0 ceph-mon[73551]: Activating manager daemon compute-0.xkdepb
Oct 10 09:46:27 compute-0 ceph-mon[73551]: osdmap e37: 3 total, 3 up, 3 in
Oct 10 09:46:27 compute-0 ceph-mon[73551]: mgrmap e20: compute-0.xkdepb(active, starting, since 0.0290775s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:27 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:27 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:27 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 10 09:46:27 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 10 09:46:27 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.xkdepb(active, starting, since 1.0465s), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:28 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'crash'
Oct 10 09:46:28 compute-0 ceph-mgr[73845]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'dashboard'
Oct 10 09:46:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:28.193+0000 7f9bd763a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:28 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct 10 09:46:28 compute-0 ceph-mon[73551]: 6.1c scrub starts
Oct 10 09:46:28 compute-0 ceph-mon[73551]: 6.1c scrub ok
Oct 10 09:46:28 compute-0 ceph-mon[73551]: 6.3 scrub starts
Oct 10 09:46:28 compute-0 ceph-mon[73551]: 6.3 scrub ok
Oct 10 09:46:28 compute-0 ceph-mon[73551]: 6.16 scrub starts
Oct 10 09:46:28 compute-0 ceph-mon[73551]: 6.16 scrub ok
Oct 10 09:46:28 compute-0 ceph-mon[73551]: mgrmap e21: compute-0.xkdepb(active, starting, since 1.0465s), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:28 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct 10 09:46:28 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:46:28 compute-0 ceph-mgr[73845]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:46:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:28.897+0000 7f9bd763a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:46:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:46:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   from numpy import show_config as show_numpy_config
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'influx'
Oct 10 09:46:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:29.083+0000 7f9bd763a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'insights'
Oct 10 09:46:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:29.159+0000 7f9bd763a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'iostat'
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:46:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:29.306+0000 7f9bd763a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'localpool'
Oct 10 09:46:29 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct 10 09:46:29 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 5.13 scrub starts
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 5.13 scrub ok
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 6.2 scrub starts
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 6.2 scrub ok
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 6.11 scrub starts
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 6.11 scrub ok
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 6.1 scrub starts
Oct 10 09:46:29 compute-0 ceph-mon[73551]: 6.1 scrub ok
Oct 10 09:46:29 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mirroring'
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'nfs'
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:46:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:30.404+0000 7f9bd763a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:46:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:30.618+0000 7f9bd763a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_support'
Oct 10 09:46:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:30.695+0000 7f9bd763a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct 10 09:46:30 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct 10 09:46:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:30.759+0000 7f9bd763a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:46:30 compute-0 ceph-mon[73551]: 6.d scrub starts
Oct 10 09:46:30 compute-0 ceph-mon[73551]: 6.d scrub ok
Oct 10 09:46:30 compute-0 ceph-mon[73551]: 6.10 scrub starts
Oct 10 09:46:30 compute-0 ceph-mon[73551]: 6.10 scrub ok
Oct 10 09:46:30 compute-0 ceph-mon[73551]: 3.8 scrub starts
Oct 10 09:46:30 compute-0 ceph-mon[73551]: 3.8 scrub ok
Oct 10 09:46:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:30.835+0000 7f9bd763a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'progress'
Oct 10 09:46:30 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 10 09:46:30 compute-0 systemd[74886]: Activating special unit Exit the Session...
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped target Main User Target.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped target Basic System.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped target Paths.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped target Sockets.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped target Timers.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 09:46:30 compute-0 systemd[74886]: Closed D-Bus User Message Bus Socket.
Oct 10 09:46:30 compute-0 systemd[74886]: Stopped Create User's Volatile Files and Directories.
Oct 10 09:46:30 compute-0 systemd[74886]: Removed slice User Application Slice.
Oct 10 09:46:30 compute-0 systemd[74886]: Reached target Shutdown.
Oct 10 09:46:30 compute-0 systemd[74886]: Finished Exit the Session.
Oct 10 09:46:30 compute-0 systemd[74886]: Reached target Exit the Session.
Oct 10 09:46:30 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 10 09:46:30 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 10 09:46:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:30.903+0000 7f9bd763a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'prometheus'
Oct 10 09:46:30 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 10 09:46:30 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 10 09:46:30 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 10 09:46:30 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 10 09:46:30 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 10 09:46:30 compute-0 systemd[1]: user-42477.slice: Consumed 38.699s CPU time.
Oct 10 09:46:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:31.226+0000 7f9bd763a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:46:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:31.318+0000 7f9bd763a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'restful'
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rgw'
Oct 10 09:46:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:31.740+0000 7f9bd763a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rook'
Oct 10 09:46:31 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 10 09:46:31 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 10 09:46:31 compute-0 ceph-mon[73551]: 6.e scrub starts
Oct 10 09:46:31 compute-0 ceph-mon[73551]: 6.e scrub ok
Oct 10 09:46:31 compute-0 ceph-mon[73551]: 6.13 scrub starts
Oct 10 09:46:31 compute-0 ceph-mon[73551]: 6.13 scrub ok
Oct 10 09:46:31 compute-0 ceph-mon[73551]: 2.15 scrub starts
Oct 10 09:46:31 compute-0 ceph-mon[73551]: 2.15 scrub ok
Oct 10 09:46:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:32.332+0000 7f9bd763a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'selftest'
Oct 10 09:46:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:32.408+0000 7f9bd763a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:46:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:32.492+0000 7f9bd763a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'stats'
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'status'
Oct 10 09:46:32 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:32 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:32.647+0000 7f9bd763a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telegraf'
Oct 10 09:46:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:32.717+0000 7f9bd763a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telemetry'
Oct 10 09:46:32 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct 10 09:46:32 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct 10 09:46:32 compute-0 ceph-mon[73551]: 5.1c scrub starts
Oct 10 09:46:32 compute-0 ceph-mon[73551]: 5.1c scrub ok
Oct 10 09:46:32 compute-0 ceph-mon[73551]: 5.1e scrub starts
Oct 10 09:46:32 compute-0 ceph-mon[73551]: 5.1e scrub ok
Oct 10 09:46:32 compute-0 ceph-mon[73551]: 2.12 deep-scrub starts
Oct 10 09:46:32 compute-0 ceph-mon[73551]: 2.12 deep-scrub ok
Oct 10 09:46:32 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:32 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:32 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.xkdepb(active, starting, since 5s), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:32.887+0000 7f9bd763a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:46:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:33.098+0000 7f9bd763a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'volumes'
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:33.363+0000 7f9bd763a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'zabbix'
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:46:33.437+0000 7f9bd763a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xkdepb
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: ms_deliver_dispatch: unhandled message 0x55f665c51860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.xkdepb(active, starting, since 0.0311202s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr handle_mgr_map Activating!
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr handle_mgr_map I am now activating
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e1 all = 1
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: balancer
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [balancer INFO root] Starting
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Manager daemon compute-0.xkdepb is now available
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:46:33
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: cephadm
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: crash
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: dashboard
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO sso] Loading SSO DB version=1
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: devicehealth
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Starting
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: iostat
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: nfs
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: orchestrator
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: pg_autoscaler
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: progress
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [progress INFO root] Loading...
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f9b5519e730>, <progress.module.GhostEvent object at 0x7f9b5519e700>, <progress.module.GhostEvent object at 0x7f9b5519e5e0>, <progress.module.GhostEvent object at 0x7f9b5519e670>, <progress.module.GhostEvent object at 0x7f9b5519e760>, <progress.module.GhostEvent object at 0x7f9b5519e790>, <progress.module.GhostEvent object at 0x7f9b5519e7c0>, <progress.module.GhostEvent object at 0x7f9b5519e7f0>, <progress.module.GhostEvent object at 0x7f9b5519e820>, <progress.module.GhostEvent object at 0x7f9b5519e850>, <progress.module.GhostEvent object at 0x7f9b5519e880>] historic events
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] recovery thread starting
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] starting setup
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: rbd_support
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: restful
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: status
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: telemetry
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [restful WARNING root] server not running: no certificate configured
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] PerfHandler: starting
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TaskHandler: starting
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"} v 0)
Oct 10 09:46:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: volumes
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] setup complete
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 10 09:46:33 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 10 09:46:33 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct 10 09:46:33 compute-0 ceph-mon[73551]: 4.c scrub starts
Oct 10 09:46:33 compute-0 ceph-mon[73551]: 4.c scrub ok
Oct 10 09:46:33 compute-0 ceph-mon[73551]: 6.1d scrub starts
Oct 10 09:46:33 compute-0 ceph-mon[73551]: 6.1d scrub ok
Oct 10 09:46:33 compute-0 ceph-mon[73551]: 2.13 scrub starts
Oct 10 09:46:33 compute-0 ceph-mon[73551]: 2.13 scrub ok
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mgrmap e22: compute-0.xkdepb(active, starting, since 5s), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:33 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:33 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:33 compute-0 ceph-mon[73551]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:33 compute-0 ceph-mon[73551]: Activating manager daemon compute-0.xkdepb
Oct 10 09:46:33 compute-0 ceph-mon[73551]: osdmap e38: 3 total, 3 up, 3 in
Oct 10 09:46:33 compute-0 ceph-mon[73551]: mgrmap e23: compute-0.xkdepb(active, starting, since 0.0311202s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:46:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:46:33 compute-0 sshd-session[91317]: Accepted publickey for ceph-admin from 192.168.122.100 port 49384 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:46:33 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 09:46:33 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 09:46:33 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.module] Engine started.
Oct 10 09:46:33 compute-0 systemd-logind[806]: New session 36 of user ceph-admin.
Oct 10 09:46:34 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 09:46:34 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 10 09:46:34 compute-0 systemd[91328]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:46:34 compute-0 systemd[91328]: Queued start job for default target Main User Target.
Oct 10 09:46:34 compute-0 systemd[91328]: Created slice User Application Slice.
Oct 10 09:46:34 compute-0 systemd[91328]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:46:34 compute-0 systemd[91328]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:46:34 compute-0 systemd[91328]: Reached target Paths.
Oct 10 09:46:34 compute-0 systemd[91328]: Reached target Timers.
Oct 10 09:46:34 compute-0 systemd[91328]: Starting D-Bus User Message Bus Socket...
Oct 10 09:46:34 compute-0 systemd[91328]: Starting Create User's Volatile Files and Directories...
Oct 10 09:46:34 compute-0 systemd[91328]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:46:34 compute-0 systemd[91328]: Reached target Sockets.
Oct 10 09:46:34 compute-0 systemd[91328]: Finished Create User's Volatile Files and Directories.
Oct 10 09:46:34 compute-0 systemd[91328]: Reached target Basic System.
Oct 10 09:46:34 compute-0 systemd[91328]: Reached target Main User Target.
Oct 10 09:46:34 compute-0 systemd[91328]: Startup finished in 150ms.
Oct 10 09:46:34 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 10 09:46:34 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Oct 10 09:46:34 compute-0 sshd-session[91317]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:46:34 compute-0 sudo[91344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:34 compute-0 sudo[91344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:34 compute-0 sudo[91344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:34 compute-0 sudo[91369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:46:34 compute-0 sudo[91369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.xkdepb(active, since 1.0568s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 10 09:46:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0[73547]: 2025-10-10T09:46:34.511+0000 7f5d41f9c640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v3: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e2 new map
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-10-10T09:46:34.511425+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:46:34.511367+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:0
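
The audit entries above (two pool creates, then fs new) are the mgr volumes module carrying out `fs volume create` on the cluster's behalf; the resulting print_map shows the new filesystem bound to metadata pool 6 and data pool [7] with no MDS ranks up yet. A minimal sketch of the equivalent hand-run admin CLI, using the pool and filesystem names taken from the audit lines (assumes a reachable cluster and a working client.admin keyring):

    # create the metadata pool and a bulk-flagged data pool, then tie them into a filesystem
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
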
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 10 09:46:34 compute-0 systemd[1]: libpod-9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d.scope: Deactivated successfully.
Oct 10 09:46:34 compute-0 podman[91395]: 2025-10-10 09:46:34.624679387 +0000 UTC m=+0.030008221 container died 9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d (image=quay.io/ceph/ceph:v19, name=distracted_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea3014f4ca22d28bd11d8b657d329f22b1b90a7e527fe117cbe348e7230becff-merged.mount: Deactivated successfully.
Oct 10 09:46:34 compute-0 podman[91395]: 2025-10-10 09:46:34.671947249 +0000 UTC m=+0.077276053 container remove 9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d (image=quay.io/ceph/ceph:v19, name=distracted_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:46:34 compute-0 systemd[1]: libpod-conmon-9d4f2444188e6b9796734e2db13e4f5bf4af1e918286c8af9824a382a711677d.scope: Deactivated successfully.
Oct 10 09:46:34 compute-0 sudo[91122]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:34 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 10 09:46:34 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:34] ENGINE Bus STARTING
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:34] ENGINE Bus STARTING
Oct 10 09:46:34 compute-0 sudo[91482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdumygdaixwcfsssvcrsyaepxoruwilo ; /usr/bin/python3'
Oct 10 09:46:34 compute-0 sudo[91482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: 3.1c scrub starts
Oct 10 09:46:34 compute-0 ceph-mon[73551]: 3.1c scrub ok
Oct 10 09:46:34 compute-0 ceph-mon[73551]: 3.19 deep-scrub starts
Oct 10 09:46:34 compute-0 ceph-mon[73551]: 3.19 deep-scrub ok
Oct 10 09:46:34 compute-0 ceph-mon[73551]: 2.18 scrub starts
Oct 10 09:46:34 compute-0 ceph-mon[73551]: 2.18 scrub ok
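
The recurring "scrub starts"/"scrub ok" pairs here and throughout this window are routine per-placement-group consistency checks; a deep-scrub additionally re-reads and compares object data rather than just metadata, and the ceph-mon lines simply mirror what the OSDs report to the cluster log. Scrubs of individual PGs can also be requested by hand, e.g.:

    ceph pg scrub 2.18        # shallow scrub of PG 2.18 (metadata-level check)
    ceph pg deep-scrub 3.19   # deep scrub: also re-reads and verifies object data
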
Oct 10 09:46:34 compute-0 ceph-mon[73551]: mgrmap e24: compute-0.xkdepb(active, since 1.0568s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 10 09:46:34 compute-0 ceph-mon[73551]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 09:46:34 compute-0 ceph-mon[73551]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
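
Both health checks are expected at this point rather than a fault: `fs new` registers the filesystem before any MDS daemon exists, so the map shows max_mds 1 with zero ranks up (the `fsmap cephfs:0` lines), and MDS_ALL_DOWN persists until the mds.cephfs service scheduled below actually deploys daemons. To watch it clear, one would typically run:

    ceph health detail       # lists MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX while no MDS is up
    ceph fs status cephfs    # shows ranks and standbys once daemons join
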
Oct 10 09:46:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 10 09:46:34 compute-0 ceph-mon[73551]: osdmap e39: 3 total, 3 up, 3 in
Oct 10 09:46:34 compute-0 ceph-mon[73551]: fsmap cephfs:0
Oct 10 09:46:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:34] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:34 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:34] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:35 compute-0 python3[91486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
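
This task shows the pattern used for every admin-side command in this run: `ceph` is executed inside a short-lived `quay.io/ceph/ceph:v19` container rather than from a host-installed client, which is why each invocation is bracketed by podman create/start and died/remove events (the `--rm` flag removes the container as soon as the command exits). The same command reflowed for readability, content identical to the `_raw_params` above with only line breaks added:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml
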
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:35] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:35] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:35] ENGINE Bus STARTED
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:35] ENGINE Bus STARTED
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:46:35] ENGINE Client ('192.168.122.100', 60804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:46:35] ENGINE Client ('192.168.122.100', 60804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.071455991 +0000 UTC m=+0.043539835 container create 15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d (image=quay.io/ceph/ceph:v19, name=laughing_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 09:46:35 compute-0 systemd[1]: Started libpod-conmon-15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d.scope.
Oct 10 09:46:35 compute-0 podman[91530]: 2025-10-10 09:46:35.142715217 +0000 UTC m=+0.098345146 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.050843584 +0000 UTC m=+0.022927438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878c625b74f6f8b45edcf1606fbfa65ffee91b0cbb8db478082a6f889859708a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878c625b74f6f8b45edcf1606fbfa65ffee91b0cbb8db478082a6f889859708a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878c625b74f6f8b45edcf1606fbfa65ffee91b0cbb8db478082a6f889859708a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.187304167 +0000 UTC m=+0.159388001 container init 15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d (image=quay.io/ceph/ceph:v19, name=laughing_allen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.207087497 +0000 UTC m=+0.179171331 container start 15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d (image=quay.io/ceph/ceph:v19, name=laughing_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.210872517 +0000 UTC m=+0.182956371 container attach 15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d (image=quay.io/ceph/ceph:v19, name=laughing_allen, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:46:35 compute-0 podman[91530]: 2025-10-10 09:46:35.244946176 +0000 UTC m=+0.200576035 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v5: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 laughing_allen[91560]: Scheduled mds.cephfs update...
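
"Scheduled mds.cephfs update..." is the orchestrator acknowledging the spec applied above. The spec file itself (/tmp/ceph_mds.yml, mounted into the container as /home/ceph_spec.yaml) is not reproduced in the log; given the saved placement compute-0;compute-1;compute-2, it presumably looks roughly like the following sketch (every field here is inferred from the surrounding log, not quoted from the file):

    # inferred shape of /tmp/ceph_mds.yml
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
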
Oct 10 09:46:35 compute-0 systemd[1]: libpod-15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d.scope: Deactivated successfully.
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.600514909 +0000 UTC m=+0.572598833 container died 15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d (image=quay.io/ceph/ceph:v19, name=laughing_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:46:35 compute-0 sudo[91369]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-878c625b74f6f8b45edcf1606fbfa65ffee91b0cbb8db478082a6f889859708a-merged.mount: Deactivated successfully.
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 09:46:35 compute-0 podman[91529]: 2025-10-10 09:46:35.657016708 +0000 UTC m=+0.629100542 container remove 15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d (image=quay.io/ceph/ceph:v19, name=laughing_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:35 compute-0 systemd[1]: libpod-conmon-15de71bfbd63ab8f1f0457389b07f1232e9bad7dbddb5e488b217461cdbd9f6d.scope: Deactivated successfully.
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 sudo[91482]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:35 compute-0 sudo[91675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:35 compute-0 sudo[91675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:35 compute-0 sudo[91675]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 10 09:46:35 compute-0 sudo[91700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:46:35 compute-0 sudo[91700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:35 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 10 09:46:35 compute-0 sudo[91748]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjrtzzomedelxgufjyxekkmgfbvnyltg ; /usr/bin/python3'
Oct 10 09:46:35 compute-0 sudo[91748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:35 compute-0 ceph-mon[73551]: 4.1b scrub starts
Oct 10 09:46:35 compute-0 ceph-mon[73551]: 4.1b scrub ok
Oct 10 09:46:35 compute-0 ceph-mon[73551]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:35 compute-0 ceph-mon[73551]: 2.19 scrub starts
Oct 10 09:46:35 compute-0 ceph-mon[73551]: 2.19 scrub ok
Oct 10 09:46:35 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:34] ENGINE Bus STARTING
Oct 10 09:46:35 compute-0 ceph-mon[73551]: 2.10 scrub starts
Oct 10 09:46:35 compute-0 ceph-mon[73551]: 2.10 scrub ok
Oct 10 09:46:35 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:34] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:35 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:35] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:35 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:35] ENGINE Bus STARTED
Oct 10 09:46:35 compute-0 ceph-mon[73551]: [10/Oct/2025:09:46:35] ENGINE Client ('192.168.122.100', 60804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 python3[91750]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
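
This second task builds an NFS Ganesha cluster fronted by an ingress service. Stripped of the podman wrapper, the ceph call is (copied from the invocation above, reflowed only):

    ceph nfs cluster create cephfs \
      --ingress --virtual-ip=192.168.122.2/24 \
      --ingress-mode=haproxy-protocol \
      '--placement=compute-0 compute-1 compute-2 '

The haproxy-protocol ingress mode puts haproxy in front of the Ganesha daemons and forwards the original client address via the PROXY protocol, so per-client NFS export restrictions still see real source IPs; 192.168.122.2/24 is the floating virtual IP the ingress service will hold.
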
Oct 10 09:46:36 compute-0 podman[91764]: 2025-10-10 09:46:36.083869059 +0000 UTC m=+0.046434295 container create 69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d (image=quay.io/ceph/ceph:v19, name=strange_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:36 compute-0 systemd[1]: Started libpod-conmon-69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d.scope.
Oct 10 09:46:36 compute-0 podman[91764]: 2025-10-10 09:46:36.064972371 +0000 UTC m=+0.027537637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa7cb52f257affb65990172e5c7386ff8f3287b88823fe9c317ae3150fa68c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa7cb52f257affb65990172e5c7386ff8f3287b88823fe9c317ae3150fa68c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa7cb52f257affb65990172e5c7386ff8f3287b88823fe9c317ae3150fa68c8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:36 compute-0 podman[91764]: 2025-10-10 09:46:36.177970179 +0000 UTC m=+0.140535425 container init 69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d (image=quay.io/ceph/ceph:v19, name=strange_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 09:46:36 compute-0 podman[91764]: 2025-10-10 09:46:36.190420566 +0000 UTC m=+0.152985792 container start 69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d (image=quay.io/ceph/ceph:v19, name=strange_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:36 compute-0 podman[91764]: 2025-10-10 09:46:36.193368918 +0000 UTC m=+0.155934144 container attach 69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d (image=quay.io/ceph/ceph:v19, name=strange_keller, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 09:46:36 compute-0 sudo[91700]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:36 compute-0 sudo[91820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
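
This INF/WRN pair (repeated below for compute-0 and compute-1) is cephadm's osd_memory_target autotuner at work: it divides each host's memory budget for OSDs by the OSD count and tries to push the per-daemon target, but on these small test nodes the computed value (134243532 bytes, about 128 MiB) is far below the option's hard minimum of 939524096 bytes (896 MiB), so the config set is rejected and the target stays at its default. On deliberately memory-starved lab clusters the usual remedy is simply to switch the autotuner off; a minimal sketch, assuming admin access:

    # stop cephadm from computing and pushing per-OSD memory targets
    ceph config set osd osd_memory_target_autotune false
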
Oct 10 09:46:36 compute-0 sudo[91820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:36 compute-0 sudo[91820]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:36 compute-0 sudo[91845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:46:36 compute-0 sudo[91845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14481 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:46:36 compute-0 sudo[91845]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:36 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:36 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 10 09:46:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:36 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:36 compute-0 ceph-mon[73551]: 5.1b scrub starts
Oct 10 09:46:36 compute-0 ceph-mon[73551]: 5.1b scrub ok
Oct 10 09:46:36 compute-0 ceph-mon[73551]: pgmap v5: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:36 compute-0 ceph-mon[73551]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:36 compute-0 ceph-mon[73551]: 2.6 scrub starts
Oct 10 09:46:36 compute-0 ceph-mon[73551]: 2.6 scrub ok
Oct 10 09:46:36 compute-0 ceph-mon[73551]: 2.f scrub starts
Oct 10 09:46:36 compute-0 ceph-mon[73551]: 2.f scrub ok
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: mgrmap e25: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 sudo[91890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:37 compute-0 sudo[91890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[91890]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 sudo[91915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:37 compute-0 sudo[91915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[91915]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 sudo[91940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-0 sudo[91940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[91940]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v6: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 10 09:46:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 10 09:46:37 compute-0 sudo[91965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:37 compute-0 sudo[91965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[91965]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 sudo[91990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-0 sudo[91990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[91990]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 sudo[92038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-0 sudo[92038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[92038]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 40 pg[8.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:37 compute-0 sudo[92063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-0 sudo[92063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 ceph-mon[73551]: 4.18 scrub starts
Oct 10 09:46:37 compute-0 ceph-mon[73551]: 4.18 scrub ok
Oct 10 09:46:37 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:37 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='client.14481 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:37 compute-0 sudo[92063]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-0 ceph-mon[73551]: 2.e scrub starts
Oct 10 09:46:37 compute-0 ceph-mon[73551]: 2.e scrub ok
Oct 10 09:46:37 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:37 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:37 compute-0 ceph-mon[73551]: 2.c scrub starts
Oct 10 09:46:37 compute-0 ceph-mon[73551]: 2.c scrub ok
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 10 09:46:37 compute-0 ceph-mon[73551]: osdmap e40: 3 total, 3 up, 3 in
Oct 10 09:46:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 10 09:46:37 compute-0 sudo[92088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:46:37 compute-0 sudo[92088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-0 sudo[92088]: pam_unix(sudo:session): session closed for user root
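
The mkdir/touch/chown/chmod/mv runs above are cephadm's staged file distribution: each managed file is first materialized under /tmp/cephadm-<fsid>/ mirroring its destination path, has its ownership and mode normalized, and is only then moved over the live path, so /etc/ceph/ceph.conf is never left with loose permissions or a half-finished update. (The payload write itself happens over cephadm's transport and does not appear as a sudo entry.) The same sequence condensed into shell form, with paths copied from the log:

    staging=/tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
    mkdir -p /etc/ceph "$staging/etc/ceph"
    touch "$staging/etc/ceph/ceph.conf.new"
    # ... file content is written into ceph.conf.new here, out of band ...
    chown -R 0:0 "$staging/etc/ceph/ceph.conf.new"
    chmod 644 "$staging/etc/ceph/ceph.conf.new"
    mv "$staging/etc/ceph/ceph.conf.new" /etc/ceph/ceph.conf
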
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 sudo[92113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:38 compute-0 sudo[92113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92113]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 sudo[92138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:38 compute-0 sudo[92138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92138]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 sudo[92163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-0 sudo[92163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92163]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:38 compute-0 sudo[92188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:38 compute-0 sudo[92188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92188]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 sudo[92213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-0 sudo[92213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92213]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 sudo[92261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-0 sudo[92261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92261]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 10 09:46:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 10 09:46:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 10 09:46:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 10 09:46:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 41 pg[8.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:46:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:46:38 compute-0 sudo[92286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-0 sudo[92286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
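
Note how the single `nfs cluster create ... --ingress` call fans out into two orchestrator specs: nfs.cephfs for the Ganesha daemons and ingress.nfs.cephfs for the haproxy/keepalived pair that will hold the virtual IP. The generated ingress spec is never printed in the log, but based on the saved placement and the earlier flags it plausibly resembles the following sketch (all fields inferred):

    # inferred shape of the generated ingress.nfs.cephfs spec
    service_type: ingress
    service_id: nfs.cephfs
    placement:
      hosts: [compute-0, compute-1, compute-2]
    spec:
      backend_service: nfs.cephfs
      virtual_ip: 192.168.122.2/24
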
Oct 10 09:46:38 compute-0 sudo[92286]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 systemd[1]: libpod-69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d.scope: Deactivated successfully.
Oct 10 09:46:38 compute-0 podman[91764]: 2025-10-10 09:46:38.646157773 +0000 UTC m=+2.608723019 container died 69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d (image=quay.io/ceph/ceph:v19, name=strange_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa7cb52f257affb65990172e5c7386ff8f3287b88823fe9c317ae3150fa68c8-merged.mount: Deactivated successfully.
Oct 10 09:46:38 compute-0 sudo[92322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 sudo[92322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 podman[91764]: 2025-10-10 09:46:38.695578839 +0000 UTC m=+2.658144065 container remove 69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d (image=quay.io/ceph/ceph:v19, name=strange_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:46:38 compute-0 sudo[92322]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 systemd[1]: libpod-conmon-69465f936521137dac6ecaef223bb196d0c33eb5843cb1c6a062e1bf4893983d.scope: Deactivated successfully.
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 sudo[91748]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 sudo[92357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:38 compute-0 sudo[92357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92357]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 sudo[92382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:38 compute-0 sudo[92382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92382]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 sudo[92407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:38 compute-0 sudo[92407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92407]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mon[73551]: 4.1a scrub starts
Oct 10 09:46:38 compute-0 ceph-mon[73551]: 4.1a scrub ok
Oct 10 09:46:38 compute-0 ceph-mon[73551]: pgmap v6: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mon[73551]: 2.d scrub starts
Oct 10 09:46:38 compute-0 ceph-mon[73551]: 2.d scrub ok
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-0 ceph-mon[73551]: mgrmap e26: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:38 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 10 09:46:38 compute-0 ceph-mon[73551]: osdmap e41: 3 total, 3 up, 3 in
Oct 10 09:46:38 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:38 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:38 compute-0 sudo[92432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:38 compute-0 sudo[92432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-0 sudo[92432]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 sudo[92457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92457]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 sudo[92528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92528]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 sudo[92582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92582]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-splqdqgoqqpkfsuviopwmyldgmcxrupk ; /usr/bin/python3'
Oct 10 09:46:39 compute-0 sudo[92628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:39 compute-0 sudo[92632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 sudo[92632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92632]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 sudo[92658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:39 compute-0 sudo[92658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92658]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 python3[92633]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:46:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:39 compute-0 sudo[92628]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:39 compute-0 sudo[92683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:39 compute-0 sudo[92683]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v9: 163 pgs: 1 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:39 compute-0 sudo[92708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 sudo[92708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92708]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 10 09:46:39 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 10 09:46:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 10 09:46:39 compute-0 sudo[92757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:39 compute-0 sudo[92757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92757]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjcfnexxphqwqlvwqatzqkmhvlyogpl ; /usr/bin/python3'
Oct 10 09:46:39 compute-0 sudo[92848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:39 compute-0 sudo[92806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 sudo[92806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92806]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 python3[92853]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089599.1135263-34029-36470824556143/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=f4f20d3bcbb08befb7837fd0e595f186c33a7cc2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:46:39 compute-0 sudo[92879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92879]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92848]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-0 sudo[92904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92904]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 sudo[92953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 sudo[92953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-0 sudo[92953]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-0 ceph-mon[73551]: 4.5 scrub starts
Oct 10 09:46:39 compute-0 ceph-mon[73551]: 4.5 scrub ok
Oct 10 09:46:39 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mon[73551]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:39 compute-0 ceph-mon[73551]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:39 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mon[73551]: 2.5 scrub starts
Oct 10 09:46:39 compute-0 ceph-mon[73551]: 2.5 scrub ok
Oct 10 09:46:39 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-0 ceph-mon[73551]: osdmap e42: 3 total, 3 up, 3 in
Oct 10 09:46:39 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 sudo[93001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyoeeseqgurcfxfetfgggvbqgchnvot ; /usr/bin/python3'
Oct 10 09:46:40 compute-0 sudo[93001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 597e1707-426f-413e-8952-e3a64fc1a519 (Updating node-exporter deployment (+3 -> 3))
Oct 10 09:46:40 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 09:46:40 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 09:46:40 compute-0 python3[93003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:40 compute-0 sudo[93004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:40 compute-0 sudo[93004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:40 compute-0 sudo[93004]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:40 compute-0 podman[93016]: 2025-10-10 09:46:40.379568707 +0000 UTC m=+0.044105984 container create a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b (image=quay.io/ceph/ceph:v19, name=distracted_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:46:40 compute-0 systemd[1]: Started libpod-conmon-a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b.scope.
Oct 10 09:46:40 compute-0 sudo[93042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:40 compute-0 sudo[93042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdff6daa0930b0ab3f91b13b705f384dadee94c32d1a45eacbab2e05bfb5b2e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdff6daa0930b0ab3f91b13b705f384dadee94c32d1a45eacbab2e05bfb5b2e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:40 compute-0 podman[93016]: 2025-10-10 09:46:40.359556192 +0000 UTC m=+0.024093489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:40 compute-0 podman[93016]: 2025-10-10 09:46:40.471411181 +0000 UTC m=+0.135948448 container init a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b (image=quay.io/ceph/ceph:v19, name=distracted_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.xkdepb(active, since 7s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:40 compute-0 podman[93016]: 2025-10-10 09:46:40.488041381 +0000 UTC m=+0.152578648 container start a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b (image=quay.io/ceph/ceph:v19, name=distracted_goldwasser, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:40 compute-0 podman[93016]: 2025-10-10 09:46:40.491496449 +0000 UTC m=+0.156033716 container attach a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b (image=quay.io/ceph/ceph:v19, name=distracted_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 09:46:40 compute-0 systemd[1]: Reloading.
Oct 10 09:46:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 10 09:46:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 10 09:46:40 compute-0 systemd-rc-local-generator[93156]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:40 compute-0 systemd-sysv-generator[93159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:40 compute-0 podman[93016]: 2025-10-10 09:46:40.937679243 +0000 UTC m=+0.602216550 container died a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b (image=quay.io/ceph/ceph:v19, name=distracted_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:40 compute-0 ceph-mon[73551]: 5.18 deep-scrub starts
Oct 10 09:46:40 compute-0 ceph-mon[73551]: 5.18 deep-scrub ok
Oct 10 09:46:40 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:40 compute-0 ceph-mon[73551]: pgmap v9: 163 pgs: 1 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:40 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:40 compute-0 ceph-mon[73551]: 2.b scrub starts
Oct 10 09:46:40 compute-0 ceph-mon[73551]: 2.b scrub ok
Oct 10 09:46:40 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-0 ceph-mon[73551]: mgrmap e27: compute-0.xkdepb(active, since 7s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:40 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 10 09:46:40 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 10 09:46:41 compute-0 systemd[1]: libpod-a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b.scope: Deactivated successfully.
Oct 10 09:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdff6daa0930b0ab3f91b13b705f384dadee94c32d1a45eacbab2e05bfb5b2e9-merged.mount: Deactivated successfully.
Oct 10 09:46:41 compute-0 podman[93016]: 2025-10-10 09:46:41.12960075 +0000 UTC m=+0.794138057 container remove a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b (image=quay.io/ceph/ceph:v19, name=distracted_goldwasser, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:46:41 compute-0 systemd[1]: libpod-conmon-a407cc2d400c3346525917cb45834fff9e667bd1c74aebba51c44b11797d7e1b.scope: Deactivated successfully.
Oct 10 09:46:41 compute-0 sudo[93001]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:41 compute-0 systemd[1]: Reloading.
Oct 10 09:46:41 compute-0 systemd-rc-local-generator[93212]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:41 compute-0 systemd-sysv-generator[93217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:41 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:46:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v11: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 10 09:46:41 compute-0 bash[93272]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 10 09:46:41 compute-0 sudo[93308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyggevgzalrfstzocupoczritkpnuxks ; /usr/bin/python3'
Oct 10 09:46:41 compute-0 sudo[93308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:41 compute-0 ceph-mon[73551]: 6.19 scrub starts
Oct 10 09:46:41 compute-0 ceph-mon[73551]: 6.19 scrub ok
Oct 10 09:46:41 compute-0 ceph-mon[73551]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 09:46:41 compute-0 ceph-mon[73551]: 2.1b scrub starts
Oct 10 09:46:41 compute-0 ceph-mon[73551]: 2.1b scrub ok
Oct 10 09:46:42 compute-0 bash[93272]: Getting image source signatures
Oct 10 09:46:42 compute-0 bash[93272]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 10 09:46:42 compute-0 bash[93272]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 10 09:46:42 compute-0 bash[93272]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 10 09:46:42 compute-0 python3[93310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.147082083 +0000 UTC m=+0.072384905 container create adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9 (image=quay.io/ceph/ceph:v19, name=sharp_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:46:42 compute-0 systemd[1]: Started libpod-conmon-adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9.scope.
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.113101056 +0000 UTC m=+0.038403958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a880a529151f1ac14cc664438749ed5188fdbbd3318760a78a5e19324b35368/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a880a529151f1ac14cc664438749ed5188fdbbd3318760a78a5e19324b35368/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.244202466 +0000 UTC m=+0.169505288 container init adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9 (image=quay.io/ceph/ceph:v19, name=sharp_carson, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.255951339 +0000 UTC m=+0.181254181 container start adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9 (image=quay.io/ceph/ceph:v19, name=sharp_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.262474653 +0000 UTC m=+0.187777475 container attach adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9 (image=quay.io/ceph/ceph:v19, name=sharp_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:46:42 compute-0 bash[93272]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 10 09:46:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 10 09:46:42 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404388837' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:46:42 compute-0 sharp_carson[93344]: 
Oct 10 09:46:42 compute-0 sharp_carson[93344]: {"fsid":"21f084a3-af34-5230-afe4-ea5cd24a55f4","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":71,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1760089555,"num_in_osds":3,"osd_in_since":1760089536,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":163}],"num_pgs":163,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84209664,"bytes_avail":64327716864,"bytes_total":64411926528,"read_bytes_sec":30030,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-10-10T09:46:34:511425+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-10-10T09:46:02.954653+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.rfugxc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.gkrssp":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"597e1707-426f-413e-8952-e3a64fc1a519":{"message":"Updating node-exporter deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 10 09:46:42 compute-0 bash[93272]: Writing manifest to image destination
Oct 10 09:46:42 compute-0 systemd[1]: libpod-adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9.scope: Deactivated successfully.
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.71689609 +0000 UTC m=+0.642198892 container died adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9 (image=quay.io/ceph/ceph:v19, name=sharp_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:46:42 compute-0 podman[93272]: 2025-10-10 09:46:42.721083134 +0000 UTC m=+1.114494503 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 10 09:46:42 compute-0 podman[93272]: 2025-10-10 09:46:42.740170999 +0000 UTC m=+1.133582348 container create 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a880a529151f1ac14cc664438749ed5188fdbbd3318760a78a5e19324b35368-merged.mount: Deactivated successfully.
Oct 10 09:46:42 compute-0 podman[93312]: 2025-10-10 09:46:42.761394208 +0000 UTC m=+0.686697000 container remove adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9 (image=quay.io/ceph/ceph:v19, name=sharp_carson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 09:46:42 compute-0 systemd[1]: libpod-conmon-adeacb219752ed15e1d799635facd0b79d318e8aa67036128fd06ac5ffd99ce9.scope: Deactivated successfully.
Oct 10 09:46:42 compute-0 sudo[93308]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6023d9695ffc0cbdb787763d4eda4f0c4a1e9861fca2a7044d2d9c52b518ad5/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:42 compute-0 podman[93272]: 2025-10-10 09:46:42.816449347 +0000 UTC m=+1.209860726 container init 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:46:42 compute-0 podman[93272]: 2025-10-10 09:46:42.821947466 +0000 UTC m=+1.215358815 container start 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:46:42 compute-0 bash[93272]: 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.828Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.828Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.830Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.830Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.830Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.830Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=arp
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=bcache
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=bonding
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=cpu
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=dmi
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.831Z caller=node_exporter.go:117 level=info collector=edac
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=entropy
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=filefd
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=netclass
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=netdev
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=netstat
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=nfs
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=nvme
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.832Z caller=node_exporter.go:117 level=info collector=os
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=pressure
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=rapl
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=selinux
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=softnet
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=stat
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=textfile
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=time
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=uname
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=xfs
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.833Z caller=node_exporter.go:117 level=info collector=zfs
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.834Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 10 09:46:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0[93426]: ts=2025-10-10T09:46:42.834Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 10 09:46:42 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:46:42 compute-0 sudo[93042]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 10 09:46:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Oct 10 09:46:42 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Oct 10 09:46:42 compute-0 sudo[93458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iukcnbcwcedsgvyixajvpijdlbxejzxx ; /usr/bin/python3'
Oct 10 09:46:42 compute-0 sudo[93458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:42 compute-0 ceph-mon[73551]: 6.1a scrub starts
Oct 10 09:46:42 compute-0 ceph-mon[73551]: 6.1a scrub ok
Oct 10 09:46:42 compute-0 ceph-mon[73551]: pgmap v11: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 10 09:46:42 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1404388837' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:46:42 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:43 compute-0 python3[93460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.220373931 +0000 UTC m=+0.058737937 container create 61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927 (image=quay.io/ceph/ceph:v19, name=jovial_pasteur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:43 compute-0 systemd[1]: Started libpod-conmon-61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927.scope.
Oct 10 09:46:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2885ff6b036678f943147d3e674b8cf5d575aa531cb32ed02223368e5455073c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2885ff6b036678f943147d3e674b8cf5d575aa531cb32ed02223368e5455073c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.202056282 +0000 UTC m=+0.040420098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.306860719 +0000 UTC m=+0.145224525 container init 61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927 (image=quay.io/ceph/ceph:v19, name=jovial_pasteur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.314142539 +0000 UTC m=+0.152506325 container start 61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927 (image=quay.io/ceph/ceph:v19, name=jovial_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.317517975 +0000 UTC m=+0.155881761 container attach 61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927 (image=quay.io/ceph/ceph:v19, name=jovial_pasteur, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:43 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:46:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v12: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 10 09:46:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 09:46:43 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4210446203' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 09:46:43 compute-0 jovial_pasteur[93476]: 
Oct 10 09:46:43 compute-0 jovial_pasteur[93476]: {"epoch":3,"fsid":"21f084a3-af34-5230-afe4-ea5cd24a55f4","modified":"2025-10-10T09:45:26.181993Z","created":"2025-10-10T09:43:13.233588Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct 10 09:46:43 compute-0 jovial_pasteur[93476]: dumped monmap epoch 3
Oct 10 09:46:43 compute-0 systemd[1]: libpod-61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927.scope: Deactivated successfully.
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.78063733 +0000 UTC m=+0.619001146 container died 61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927 (image=quay.io/ceph/ceph:v19, name=jovial_pasteur, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2885ff6b036678f943147d3e674b8cf5d575aa531cb32ed02223368e5455073c-merged.mount: Deactivated successfully.
Oct 10 09:46:43 compute-0 podman[93461]: 2025-10-10 09:46:43.831455644 +0000 UTC m=+0.669819440 container remove 61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927 (image=quay.io/ceph/ceph:v19, name=jovial_pasteur, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:46:43 compute-0 systemd[1]: libpod-conmon-61a7b1c3d571b1149c52b20b86eb63a0a8635b4023846d28a81a023531419927.scope: Deactivated successfully.
Oct 10 09:46:43 compute-0 sudo[93458]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:43 compute-0 ceph-mon[73551]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 10 09:46:43 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4210446203' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 09:46:44 compute-0 sudo[93536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdpoiflnccjozhnbmztlwgkyiyvicbmb ; /usr/bin/python3'
Oct 10 09:46:44 compute-0 sudo[93536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:44 compute-0 python3[93538]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:44 compute-0 podman[93539]: 2025-10-10 09:46:44.572289601 +0000 UTC m=+0.048520626 container create 9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54 (image=quay.io/ceph/ceph:v19, name=cool_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:44 compute-0 systemd[1]: Started libpod-conmon-9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54.scope.
Oct 10 09:46:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:44 compute-0 podman[93539]: 2025-10-10 09:46:44.550177423 +0000 UTC m=+0.026408488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6764c32945e383095e2f21e0c939937c5256ec4a75656048b4ab5cf452c9a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6764c32945e383095e2f21e0c939937c5256ec4a75656048b4ab5cf452c9a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:44 compute-0 podman[93539]: 2025-10-10 09:46:44.672252942 +0000 UTC m=+0.148484047 container init 9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54 (image=quay.io/ceph/ceph:v19, name=cool_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:46:44 compute-0 podman[93539]: 2025-10-10 09:46:44.684961698 +0000 UTC m=+0.161192753 container start 9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54 (image=quay.io/ceph/ceph:v19, name=cool_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:46:44 compute-0 podman[93539]: 2025-10-10 09:46:44.689271766 +0000 UTC m=+0.165502821 container attach 9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54 (image=quay.io/ceph/ceph:v19, name=cool_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 09:46:45 compute-0 ceph-mon[73551]: pgmap v12: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 10 09:46:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct 10 09:46:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1088819812' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 10 09:46:45 compute-0 cool_yalow[93554]: [client.openstack]
Oct 10 09:46:45 compute-0 cool_yalow[93554]:         key = AQAP1ehoAAAAABAAt8v7pISuvMofUPTRybMptA==
Oct 10 09:46:45 compute-0 cool_yalow[93554]:         caps mgr = "allow *"
Oct 10 09:46:45 compute-0 cool_yalow[93554]:         caps mon = "profile rbd"
Oct 10 09:46:45 compute-0 cool_yalow[93554]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 10 09:46:45 compute-0 systemd[1]: libpod-9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54.scope: Deactivated successfully.
Oct 10 09:46:45 compute-0 podman[93539]: 2025-10-10 09:46:45.155767707 +0000 UTC m=+0.631998762 container died 9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54 (image=quay.io/ceph/ceph:v19, name=cool_yalow, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6764c32945e383095e2f21e0c939937c5256ec4a75656048b4ab5cf452c9a2-merged.mount: Deactivated successfully.
Oct 10 09:46:45 compute-0 podman[93539]: 2025-10-10 09:46:45.204833492 +0000 UTC m=+0.681064507 container remove 9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54 (image=quay.io/ceph/ceph:v19, name=cool_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:45 compute-0 sudo[93536]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:45 compute-0 systemd[1]: libpod-conmon-9990ce1fc55f53860f4c9c1ee3a6f57f5e7391db93c7e50bf6329ce9985d3d54.scope: Deactivated successfully.
Oct 10 09:46:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v13: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 10 09:46:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 10 09:46:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:45 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Oct 10 09:46:45 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Oct 10 09:46:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1088819812' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 10 09:46:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:46 compute-0 sudo[93736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyaitmkwieaylyozrdjozycqeeoziixc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760089606.159013-34101-67987227221639/async_wrapper.py j943126714546 30 /home/zuul/.ansible/tmp/ansible-tmp-1760089606.159013-34101-67987227221639/AnsiballZ_command.py _'
Oct 10 09:46:46 compute-0 sudo[93736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:46 compute-0 ansible-async_wrapper.py[93738]: Invoked with j943126714546 30 /home/zuul/.ansible/tmp/ansible-tmp-1760089606.159013-34101-67987227221639/AnsiballZ_command.py _
Oct 10 09:46:46 compute-0 ansible-async_wrapper.py[93741]: Starting module and watcher
Oct 10 09:46:46 compute-0 ansible-async_wrapper.py[93741]: Start watching 93742 (30)
Oct 10 09:46:46 compute-0 ansible-async_wrapper.py[93742]: Start module (93742)
Oct 10 09:46:46 compute-0 ansible-async_wrapper.py[93738]: Return async_wrapper task started.
Oct 10 09:46:46 compute-0 sudo[93736]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:46 compute-0 python3[93743]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:46 compute-0 podman[93744]: 2025-10-10 09:46:46.986846674 +0000 UTC m=+0.055175224 container create 094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e (image=quay.io/ceph/ceph:v19, name=funny_booth, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:46:47 compute-0 systemd[1]: Started libpod-conmon-094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e.scope.
Oct 10 09:46:47 compute-0 ceph-mon[73551]: pgmap v13: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 10 09:46:47 compute-0 ceph-mon[73551]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 10 09:46:47 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da49b7dbe235a3eeb2301d6ac9e286e3062f58cd95499f07a65581650cba91c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da49b7dbe235a3eeb2301d6ac9e286e3062f58cd95499f07a65581650cba91c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:47 compute-0 podman[93744]: 2025-10-10 09:46:46.967151168 +0000 UTC m=+0.035479738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:47 compute-0 podman[93744]: 2025-10-10 09:46:47.071681136 +0000 UTC m=+0.140009766 container init 094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e (image=quay.io/ceph/ceph:v19, name=funny_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 09:46:47 compute-0 podman[93744]: 2025-10-10 09:46:47.078249262 +0000 UTC m=+0.146577832 container start 094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e (image=quay.io/ceph/ceph:v19, name=funny_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:46:47 compute-0 podman[93744]: 2025-10-10 09:46:47.082497497 +0000 UTC m=+0.150826087 container attach 094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e (image=quay.io/ceph/ceph:v19, name=funny_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:46:47 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:47 compute-0 funny_booth[93759]: 
Oct 10 09:46:47 compute-0 funny_booth[93759]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 09:46:47 compute-0 systemd[1]: libpod-094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e.scope: Deactivated successfully.
Oct 10 09:46:47 compute-0 podman[93744]: 2025-10-10 09:46:47.478974575 +0000 UTC m=+0.547303185 container died 094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e (image=quay.io/ceph/ceph:v19, name=funny_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v14: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Oct 10 09:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-da49b7dbe235a3eeb2301d6ac9e286e3062f58cd95499f07a65581650cba91c3-merged.mount: Deactivated successfully.
Oct 10 09:46:47 compute-0 podman[93744]: 2025-10-10 09:46:47.54145407 +0000 UTC m=+0.609782650 container remove 094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e (image=quay.io/ceph/ceph:v19, name=funny_booth, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:46:47 compute-0 systemd[1]: libpod-conmon-094b77b26d4a5cff47984110e6abfcbfc8c2ddf580f1a96249602d3a5f142b1e.scope: Deactivated successfully.
Oct 10 09:46:47 compute-0 ansible-async_wrapper.py[93742]: Module complete (93742)
Oct 10 09:46:47 compute-0 sudo[93842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzcoecnuqzrabaumxzvjrknydnbietf ; /usr/bin/python3'
Oct 10 09:46:47 compute-0 sudo[93842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:48 compute-0 python3[93844]: ansible-ansible.legacy.async_status Invoked with jid=j943126714546.93738 mode=status _async_dir=/root/.ansible_async
Oct 10 09:46:48 compute-0 sudo[93842]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:48 compute-0 sudo[93891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnhmbysupjlifgqbgnuqxjuikjvhsgij ; /usr/bin/python3'
Oct 10 09:46:48 compute-0 sudo[93891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:48 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 597e1707-426f-413e-8952-e3a64fc1a519 (Updating node-exporter deployment (+3 -> 3))
Oct 10 09:46:48 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 597e1707-426f-413e-8952-e3a64fc1a519 (Updating node-exporter deployment (+3 -> 3)) in 8 seconds
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:48 compute-0 python3[93893]: ansible-ansible.legacy.async_status Invoked with jid=j943126714546.93738 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:48 compute-0 sudo[93891]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:48 compute-0 sudo[93894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:48 compute-0 sudo[93894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:48 compute-0 sudo[93894]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:48 compute-0 sudo[93919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:46:48 compute-0 sudo[93919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:48 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 12 completed events
Oct 10 09:46:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:46:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:48 compute-0 sudo[94015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqbatvyysuivqhzenkttzrxxsirhnhko ; /usr/bin/python3'
Oct 10 09:46:48 compute-0 sudo[94015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:48 compute-0 podman[93991]: 2025-10-10 09:46:48.933745056 +0000 UTC m=+0.063304915 container create 0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wiles, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 09:46:48 compute-0 systemd[1]: Started libpod-conmon-0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8.scope.
Oct 10 09:46:49 compute-0 podman[93991]: 2025-10-10 09:46:48.903623834 +0000 UTC m=+0.033183683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:49 compute-0 podman[93991]: 2025-10-10 09:46:49.035870152 +0000 UTC m=+0.165430001 container init 0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:49 compute-0 podman[93991]: 2025-10-10 09:46:49.048594913 +0000 UTC m=+0.178154742 container start 0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wiles, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:49 compute-0 podman[93991]: 2025-10-10 09:46:49.052130838 +0000 UTC m=+0.181690687 container attach 0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 10 09:46:49 compute-0 fervent_wiles[94027]: 167 167
Oct 10 09:46:49 compute-0 systemd[1]: libpod-0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8.scope: Deactivated successfully.
Oct 10 09:46:49 compute-0 podman[93991]: 2025-10-10 09:46:49.05625619 +0000 UTC m=+0.185816049 container died 0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:49 compute-0 ceph-mon[73551]: pgmap v14: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c343c3f82292c9113c380c12f4be881087ef8e719e052ca6b09db2a5592dd52d-merged.mount: Deactivated successfully.
Oct 10 09:46:49 compute-0 python3[94024]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:49 compute-0 podman[93991]: 2025-10-10 09:46:49.107101232 +0000 UTC m=+0.236661081 container remove 0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wiles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:46:49 compute-0 systemd[1]: libpod-conmon-0c57788222a420a14f82b830a08fb490ac89607ab1e95ba8d705435d476314c8.scope: Deactivated successfully.
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.174312312 +0000 UTC m=+0.055192203 container create aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:49 compute-0 systemd[1]: Started libpod-conmon-aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01.scope.
Oct 10 09:46:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f621575bc1922a081e9849642e349ba1dffa485fa7019ab8aab38ba67a23af6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f621575bc1922a081e9849642e349ba1dffa485fa7019ab8aab38ba67a23af6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.150666998 +0000 UTC m=+0.031546909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.256782543 +0000 UTC m=+0.137662454 container init aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.269842506 +0000 UTC m=+0.150722397 container start aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.273098291 +0000 UTC m=+0.153978182 container attach aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.289702827 +0000 UTC m=+0.047670920 container create 6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:46:49 compute-0 systemd[1]: Started libpod-conmon-6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50.scope.
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.270570889 +0000 UTC m=+0.028538972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a838cbf479fd339825a5146fe147c5eca7c947223ea53726b51252c30b3f79c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a838cbf479fd339825a5146fe147c5eca7c947223ea53726b51252c30b3f79c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a838cbf479fd339825a5146fe147c5eca7c947223ea53726b51252c30b3f79c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a838cbf479fd339825a5146fe147c5eca7c947223ea53726b51252c30b3f79c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a838cbf479fd339825a5146fe147c5eca7c947223ea53726b51252c30b3f79c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.402557559 +0000 UTC m=+0.160525662 container init 6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.411760247 +0000 UTC m=+0.169728340 container start 6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.415355903 +0000 UTC m=+0.173324006 container attach 6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:46:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v15: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 10 09:46:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14523 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:49 compute-0 upbeat_cray[94066]: 
Oct 10 09:46:49 compute-0 upbeat_cray[94066]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 09:46:49 compute-0 systemd[1]: libpod-aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01.scope: Deactivated successfully.
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.658953087 +0000 UTC m=+0.539832998 container died aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f621575bc1922a081e9849642e349ba1dffa485fa7019ab8aab38ba67a23af6-merged.mount: Deactivated successfully.
Oct 10 09:46:49 compute-0 podman[94044]: 2025-10-10 09:46:49.696423146 +0000 UTC m=+0.577303037 container remove aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:49 compute-0 systemd[1]: libpod-conmon-aee52f1f38a879f59b542601577df6734c1e2888b3b97feecd3428dec4881f01.scope: Deactivated successfully.
Oct 10 09:46:49 compute-0 sudo[94015]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:49 compute-0 mystifying_joliot[94086]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:46:49 compute-0 mystifying_joliot[94086]: --> All data devices are unavailable
Oct 10 09:46:49 compute-0 systemd[1]: libpod-6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50.scope: Deactivated successfully.
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.794845113 +0000 UTC m=+0.552813196 container died 6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:46:49 compute-0 podman[94068]: 2025-10-10 09:46:49.842396448 +0000 UTC m=+0.600364531 container remove 6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:49 compute-0 systemd[1]: libpod-conmon-6f3bde5a2d241c7532b571e5528a0be6601f319351785bc060dea0310c5c7a50.scope: Deactivated successfully.
Oct 10 09:46:49 compute-0 sudo[93919]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a838cbf479fd339825a5146fe147c5eca7c947223ea53726b51252c30b3f79c-merged.mount: Deactivated successfully.
Oct 10 09:46:49 compute-0 sudo[94144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:49 compute-0 sudo[94144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:49 compute-0 sudo[94144]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:50 compute-0 sudo[94169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:46:50 compute-0 sudo[94169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:50 compute-0 ceph-mon[73551]: pgmap v15: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 10 09:46:50 compute-0 ceph-mon[73551]: from='client.14523 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.396918629 +0000 UTC m=+0.052074531 container create b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:46:50 compute-0 systemd[1]: Started libpod-conmon-b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e.scope.
Oct 10 09:46:50 compute-0 sudo[94274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvubqblchitngrtkqpfefqdzcmdqwuqp ; /usr/bin/python3'
Oct 10 09:46:50 compute-0 sudo[94274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.375210519 +0000 UTC m=+0.030366501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.483430332 +0000 UTC m=+0.138586264 container init b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.496290298 +0000 UTC m=+0.151446200 container start b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.499545873 +0000 UTC m=+0.154701775 container attach b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:46:50 compute-0 festive_galileo[94276]: 167 167
Oct 10 09:46:50 compute-0 systemd[1]: libpod-b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e.scope: Deactivated successfully.
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.501866098 +0000 UTC m=+0.157022040 container died b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_galileo, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-526baa9a38919b548c9c456a4696088a010c23423e7928777946b0a45383e0bd-merged.mount: Deactivated successfully.
Oct 10 09:46:50 compute-0 podman[94234]: 2025-10-10 09:46:50.54871943 +0000 UTC m=+0.203875332 container remove b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:50 compute-0 systemd[1]: libpod-conmon-b717b4f32cc722283ef2d1f1ab80a41627ca2ca52cb8841c1a4a285a5980ae5e.scope: Deactivated successfully.
Oct 10 09:46:50 compute-0 python3[94278]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:50 compute-0 podman[94294]: 2025-10-10 09:46:50.712450895 +0000 UTC m=+0.057547058 container create 92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25 (image=quay.io/ceph/ceph:v19, name=youthful_pasteur, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:50 compute-0 podman[94305]: 2025-10-10 09:46:50.742353411 +0000 UTC m=+0.055613616 container create 32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:50 compute-0 systemd[1]: Started libpod-conmon-92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25.scope.
Oct 10 09:46:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:50 compute-0 systemd[1]: Started libpod-conmon-32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee.scope.
Oct 10 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67f425e5c4ec2a43a796cf2cb665cdcf560ef0c59a9e6cc66b204b8880eda09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67f425e5c4ec2a43a796cf2cb665cdcf560ef0c59a9e6cc66b204b8880eda09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:50 compute-0 podman[94294]: 2025-10-10 09:46:50.694513236 +0000 UTC m=+0.039609419 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:50 compute-0 podman[94294]: 2025-10-10 09:46:50.795544198 +0000 UTC m=+0.140640471 container init 92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25 (image=quay.io/ceph/ceph:v19, name=youthful_pasteur, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:46:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b847f669194ccacfee33a347acc1632d5fe5871c582f6e118c84dcd0a21ebc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b847f669194ccacfee33a347acc1632d5fe5871c582f6e118c84dcd0a21ebc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b847f669194ccacfee33a347acc1632d5fe5871c582f6e118c84dcd0a21ebc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b847f669194ccacfee33a347acc1632d5fe5871c582f6e118c84dcd0a21ebc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:50 compute-0 podman[94305]: 2025-10-10 09:46:50.711198085 +0000 UTC m=+0.024458340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:50 compute-0 podman[94294]: 2025-10-10 09:46:50.808855127 +0000 UTC m=+0.153951290 container start 92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25 (image=quay.io/ceph/ceph:v19, name=youthful_pasteur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:50 compute-0 podman[94294]: 2025-10-10 09:46:50.813609381 +0000 UTC m=+0.158705544 container attach 92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25 (image=quay.io/ceph/ceph:v19, name=youthful_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:46:50 compute-0 podman[94305]: 2025-10-10 09:46:50.822053213 +0000 UTC m=+0.135313498 container init 32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_kalam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:50 compute-0 podman[94305]: 2025-10-10 09:46:50.835621901 +0000 UTC m=+0.148882156 container start 32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:46:50 compute-0 podman[94305]: 2025-10-10 09:46:50.840429177 +0000 UTC m=+0.153689432 container attach 32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_kalam, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:51 compute-0 competent_kalam[94332]: {
Oct 10 09:46:51 compute-0 competent_kalam[94332]:     "0": [
Oct 10 09:46:51 compute-0 competent_kalam[94332]:         {
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "devices": [
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "/dev/loop3"
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             ],
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "lv_name": "ceph_lv0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "lv_size": "21470642176",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "name": "ceph_lv0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "tags": {
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.cluster_name": "ceph",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.crush_device_class": "",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.encrypted": "0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.osd_id": "0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.type": "block",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.vdo": "0",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:                 "ceph.with_tpm": "0"
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             },
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "type": "block",
Oct 10 09:46:51 compute-0 competent_kalam[94332]:             "vg_name": "ceph_vg0"
Oct 10 09:46:51 compute-0 competent_kalam[94332]:         }
Oct 10 09:46:51 compute-0 competent_kalam[94332]:     ]
Oct 10 09:46:51 compute-0 competent_kalam[94332]: }
Oct 10 09:46:51 compute-0 systemd[1]: libpod-32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee.scope: Deactivated successfully.
Oct 10 09:46:51 compute-0 podman[94305]: 2025-10-10 09:46:51.123713852 +0000 UTC m=+0.436974067 container died 32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 09:46:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:51 compute-0 youthful_pasteur[94327]: 
Oct 10 09:46:51 compute-0 youthful_pasteur[94327]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 10 09:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b847f669194ccacfee33a347acc1632d5fe5871c582f6e118c84dcd0a21ebc4-merged.mount: Deactivated successfully.
Oct 10 09:46:51 compute-0 systemd[1]: libpod-92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25.scope: Deactivated successfully.
Oct 10 09:46:51 compute-0 podman[94305]: 2025-10-10 09:46:51.195690285 +0000 UTC m=+0.508950500 container remove 32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_kalam, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:46:51 compute-0 podman[94294]: 2025-10-10 09:46:51.19615096 +0000 UTC m=+0.541247123 container died 92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25 (image=quay.io/ceph/ceph:v19, name=youthful_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:51 compute-0 systemd[1]: libpod-conmon-32103d9097f018c7b272f8250e614c1bffed73728c5dbd781dc2fbbf38997dee.scope: Deactivated successfully.
Oct 10 09:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67f425e5c4ec2a43a796cf2cb665cdcf560ef0c59a9e6cc66b204b8880eda09-merged.mount: Deactivated successfully.
Oct 10 09:46:51 compute-0 sudo[94169]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:51 compute-0 podman[94294]: 2025-10-10 09:46:51.24168948 +0000 UTC m=+0.586785643 container remove 92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25 (image=quay.io/ceph/ceph:v19, name=youthful_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:51 compute-0 systemd[1]: libpod-conmon-92ce22e4281703b8b5c442f9fe45539094047e5bc76d91573d152c3848df8d25.scope: Deactivated successfully.
Oct 10 09:46:51 compute-0 sudo[94274]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:51 compute-0 sudo[94388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:51 compute-0 sudo[94388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:51 compute-0 sudo[94388]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:51 compute-0 sudo[94413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:46:51 compute-0 sudo[94413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v16: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:51 compute-0 ansible-async_wrapper.py[93741]: Done in kid B.
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.804784147 +0000 UTC m=+0.055642668 container create 031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 09:46:51 compute-0 systemd[1]: Started libpod-conmon-031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778.scope.
Oct 10 09:46:51 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.877186105 +0000 UTC m=+0.128044655 container init 031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.786437936 +0000 UTC m=+0.037296466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.884532162 +0000 UTC m=+0.135390672 container start 031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.888525931 +0000 UTC m=+0.139384461 container attach 031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_joliot, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:51 compute-0 clever_joliot[94493]: 167 167
Oct 10 09:46:51 compute-0 systemd[1]: libpod-031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778.scope: Deactivated successfully.
Oct 10 09:46:51 compute-0 conmon[94493]: conmon 031287c293a3bcf77a30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778.scope/container/memory.events
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.89281445 +0000 UTC m=+0.143672970 container died 031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 09:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9922cd1cc0dd60bc8aac3591929006be11067a073d5465dd015269fd9585895e-merged.mount: Deactivated successfully.
Oct 10 09:46:51 compute-0 podman[94476]: 2025-10-10 09:46:51.931008872 +0000 UTC m=+0.181867382 container remove 031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:46:51 compute-0 systemd[1]: libpod-conmon-031287c293a3bcf77a3071ee14bea03c5ef2bee195ffbf42d8f89f2aa8469778.scope: Deactivated successfully.
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.087099711 +0000 UTC m=+0.043084631 container create 4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:52 compute-0 systemd[1]: Started libpod-conmon-4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8.scope.
Oct 10 09:46:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7797b6c3df643e257496ff0b2459e616874c960e1a08cdbf50e285d8b2e65f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7797b6c3df643e257496ff0b2459e616874c960e1a08cdbf50e285d8b2e65f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7797b6c3df643e257496ff0b2459e616874c960e1a08cdbf50e285d8b2e65f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7797b6c3df643e257496ff0b2459e616874c960e1a08cdbf50e285d8b2e65f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:52 compute-0 sudo[94558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsaqqokergvipopypokvjamxchishea ; /usr/bin/python3'
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.068950086 +0000 UTC m=+0.024934996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:52 compute-0 sudo[94558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.168823679 +0000 UTC m=+0.124808659 container init 4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.181456388 +0000 UTC m=+0.137441308 container start 4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.185232509 +0000 UTC m=+0.141217429 container attach 4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:52 compute-0 python3[94560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.379894464 +0000 UTC m=+0.062583402 container create 70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e (image=quay.io/ceph/ceph:v19, name=infallible_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:52 compute-0 systemd[1]: Started libpod-conmon-70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e.scope.
Oct 10 09:46:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.355882478 +0000 UTC m=+0.038571496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7e45fd1551a4974a4dc80fe67f9a8409070ff64c085f1eb4a3653037a6d959/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7e45fd1551a4974a4dc80fe67f9a8409070ff64c085f1eb4a3653037a6d959/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.46374013 +0000 UTC m=+0.146429048 container init 70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e (image=quay.io/ceph/ceph:v19, name=infallible_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.470027503 +0000 UTC m=+0.152716441 container start 70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e (image=quay.io/ceph/ceph:v19, name=infallible_leavitt, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.473757743 +0000 UTC m=+0.156446671 container attach 70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e (image=quay.io/ceph/ceph:v19, name=infallible_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:52 compute-0 ceph-mon[73551]: from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:52 compute-0 ceph-mon[73551]: pgmap v16: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:52 compute-0 lvm[94670]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:46:52 compute-0 lvm[94670]: VG ceph_vg0 finished
Oct 10 09:46:52 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:52 compute-0 infallible_leavitt[94590]: 
Oct 10 09:46:52 compute-0 infallible_leavitt[94590]: [{"container_id": "b09e35c74660", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.13%", "created": "2025-10-10T09:43:58.961941Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-10T09:46:35.638381Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-10-10T09:43:58.849337Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@crash.compute-0", "version": "19.2.3"}, {"container_id": "8a2c16c69263", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.63%", "created": "2025-10-10T09:44:38.113929Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-10T09:46:35.753982Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-10-10T09:44:37.837741Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@crash.compute-1", "version": "19.2.3"}, {"container_id": "e6626ca9d8bc", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.33%", "created": "2025-10-10T09:45:34.956127Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-10T09:46:35.330731Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-10-10T09:45:34.820764Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@crash.compute-2", "version": "19.2.3"}, {"container_id": "8d50af9bcf40", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.92%", "created": "2025-10-10T09:43:21.666657Z", "daemon_id": "compute-0.xkdepb", "daemon_name": "mgr.compute-0.xkdepb", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-10T09:46:35.638257Z", "memory_usage": 541799219, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-10T09:43:21.547524Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mgr.compute-0.xkdepb", "version": "19.2.3"}, {"container_id": "90ca3b90e3af", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "39.09%", "created": "2025-10-10T09:45:33.175350Z", "daemon_id": "compute-1.rfugxc", "daemon_name": "mgr.compute-1.rfugxc", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-10T09:46:35.754293Z", "memory_usage": 505308774, "ports": [8765], "service_name": "mgr", "started": "2025-10-10T09:45:33.049695Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mgr.compute-1.rfugxc", "version": "19.2.3"}, {"container_id": "04def5c47018", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "37.04%", "created": "2025-10-10T09:45:26.552855Z", "daemon_id": "compute-2.gkrssp", "daemon_name": "mgr.compute-2.gkrssp", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-10T09:46:35.330653Z", "memory_usage": 504469913, "ports": [8765], "service_name": "mgr", "started": "2025-10-10T09:45:26.447296Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mgr.compute-2.gkrssp", "version": "19.2.3"}, {"container_id": "2dc12dfc8143", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.88%", "created": "2025-10-10T09:43:15.637450Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-10T09:46:35.638137Z", "memory_request": 2147483648, "memory_usage": 60418949, "ports": [], "service_name": "mon", "started": "2025-10-10T09:43:18.116238Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mon.compute-0", "version": "19.2.3"}, {"container_id": "ecb3fdbc3181", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.28%", "created": "2025-10-10T09:45:22.039066Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-10T09:46:35.754210Z", "memory_request": 2147483648, "memory_usage": 51474595, "ports": [], "service_name": "mon", "started": "2025-10-10T09:45:21.911911Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mon.compute-1", "version": "19.2.3"}, {"container_id": "bb439f1f2eff", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.43%", "created": "2025-10-10T09:45:19.796637Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-10T09:46:35.330512Z", "memory_request": 2147483648, "memory_usage": 49125785, "ports": [], "service_name": "mon", "started": "2025-10-10T09:45:19.690822Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2025-10-10T09:46:42.909160Z daemon:node-exporter.compute-0 [INFO] \"Deployed node-exporter.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-10-10T09:46:45.823859Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-10-10T09:46:48.339595Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "202b142a1e8e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.83%", "created": "2025-10-10T09:44:50.021696Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-10T09:46:35.638471Z", "memory_request": 4294967296, "memory_usage": 72341258, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-10T09:44:49.881290Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@osd.0", "version": "19.2.3"}, {"container_id": "71f3fc600b79", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.99%", "created": "2025-10-10T09:44:51.846293Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": 
"compute-1", "is_active": false, "last_refresh": "2025-10-10T09:46:35.754126Z", "memory_request": 5502952652, "memory_usage": 65787658, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-10T09:44:51.693610Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@osd.1", "version": "19.2.3"}, {"container_id": "0aa08009f7f5", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "80.24%", "created": "2025-10-10T09:45:46.861512Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-10T09:46:35.330806Z", "memory_request": 4294967296, "memory_usage": 62243471, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-10T09:45:46.773468Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@osd.2", "version": "19.2.3"}]
Oct 10 09:46:52 compute-0 happy_heyrovsky[94544]: {}
Oct 10 09:46:52 compute-0 systemd[1]: libpod-70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e.scope: Deactivated successfully.
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.851050054 +0000 UTC m=+0.533738982 container died 70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e (image=quay.io/ceph/ceph:v19, name=infallible_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 09:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7e45fd1551a4974a4dc80fe67f9a8409070ff64c085f1eb4a3653037a6d959-merged.mount: Deactivated successfully.
Oct 10 09:46:52 compute-0 systemd[1]: libpod-4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8.scope: Deactivated successfully.
Oct 10 09:46:52 compute-0 systemd[1]: libpod-4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8.scope: Consumed 1.138s CPU time.
Oct 10 09:46:52 compute-0 podman[94563]: 2025-10-10 09:46:52.900262332 +0000 UTC m=+0.582951280 container remove 70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e (image=quay.io/ceph/ceph:v19, name=infallible_leavitt, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.903859298 +0000 UTC m=+0.859844178 container died 4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:46:52 compute-0 systemd[1]: libpod-conmon-70d75edfc82f4915e8a536359f0e79d7c1c6670462699908e2965453c1bd640e.scope: Deactivated successfully.
Oct 10 09:46:52 compute-0 sudo[94558]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7797b6c3df643e257496ff0b2459e616874c960e1a08cdbf50e285d8b2e65f3-merged.mount: Deactivated successfully.
Oct 10 09:46:52 compute-0 podman[94516]: 2025-10-10 09:46:52.972855365 +0000 UTC m=+0.928840245 container remove 4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:46:52 compute-0 systemd[1]: libpod-conmon-4078a51d061957bb04ed0d783994c1c42a7ff1620b9d1a769a8d88e6f319fcf8.scope: Deactivated successfully.
Oct 10 09:46:53 compute-0 sudo[94413]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:53 compute-0 rsyslogd[1006]: message too long (11754) with configured size 8096, begin of message is: [{"container_id": "b09e35c74660", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 10 09:46:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:53 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 75bbca0b-442c-4f3e-adf5-b972cab84341 (Updating rgw.rgw deployment (+3 -> 3))
Oct 10 09:46:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 10 09:46:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 10 09:46:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:53 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.qujzwn on compute-2
Oct 10 09:46:53 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.qujzwn on compute-2
Oct 10 09:46:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v17: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:53 compute-0 sudo[94720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbpiftyvklvtcvnznzwmxpqijwjfsixd ; /usr/bin/python3'
Oct 10 09:46:53 compute-0 sudo[94720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:53 compute-0 python3[94722]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:54.001065268 +0000 UTC m=+0.056113943 container create a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4 (image=quay.io/ceph/ceph:v19, name=pensive_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:54 compute-0 systemd[1]: Started libpod-conmon-a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4.scope.
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:53.971563335 +0000 UTC m=+0.026612020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cbcf8bc6da6734ebee7b0d0d753cff3f7420b297bf545a164491be9a3b62dc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cbcf8bc6da6734ebee7b0d0d753cff3f7420b297bf545a164491be9a3b62dc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:54.103815735 +0000 UTC m=+0.158864420 container init a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4 (image=quay.io/ceph/ceph:v19, name=pensive_wilbur, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:54.111660928 +0000 UTC m=+0.166709553 container start a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4 (image=quay.io/ceph/ceph:v19, name=pensive_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:54.11514644 +0000 UTC m=+0.170195125 container attach a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4 (image=quay.io/ceph/ceph:v19, name=pensive_wilbur, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117532342' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:46:54 compute-0 pensive_wilbur[94738]: 
Oct 10 09:46:54 compute-0 pensive_wilbur[94738]: {"fsid":"21f084a3-af34-5230-afe4-ea5cd24a55f4","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":83,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1760089555,"num_in_osds":3,"osd_in_since":1760089536,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":163}],"num_pgs":163,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84238336,"bytes_avail":64327688192,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-10-10T09:46:34:511425+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-10-10T09:46:02.954653+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.rfugxc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.gkrssp":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"75bbca0b-442c-4f3e-adf5-b972cab84341":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 10 09:46:54 compute-0 systemd[1]: libpod-a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4.scope: Deactivated successfully.
Oct 10 09:46:54 compute-0 conmon[94738]: conmon a4ef2c8143f110c61da7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4.scope/container/memory.events
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:54.532152472 +0000 UTC m=+0.587201087 container died a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4 (image=quay.io/ceph/ceph:v19, name=pensive_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 09:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cbcf8bc6da6734ebee7b0d0d753cff3f7420b297bf545a164491be9a3b62dc0-merged.mount: Deactivated successfully.
Oct 10 09:46:54 compute-0 podman[94723]: 2025-10-10 09:46:54.578999714 +0000 UTC m=+0.634048339 container remove a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4 (image=quay.io/ceph/ceph:v19, name=pensive_wilbur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:54 compute-0 systemd[1]: libpod-conmon-a4ef2c8143f110c61da71ac56c1fb82547db1ab99d1e2a477b22b4b7b5f7afe4.scope: Deactivated successfully.
Oct 10 09:46:54 compute-0 sudo[94720]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:54 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.zajetc on compute-1
Oct 10 09:46:54 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.zajetc on compute-1
Oct 10 09:46:55 compute-0 ceph-mon[73551]: Deploying daemon rgw.rgw.compute-2.qujzwn on compute-2
Oct 10 09:46:55 compute-0 ceph-mon[73551]: pgmap v17: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/117532342' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 10 09:46:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 10 09:46:55 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 10 09:46:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct 10 09:46:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 09:46:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 43 pg[9.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:46:55 compute-0 sudo[94801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlzfjwzkczhlpnokhsismtkvkwdgqajd ; /usr/bin/python3'
Oct 10 09:46:55 compute-0 sudo[94801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v19: 164 pgs: 1 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:55 compute-0 python3[94803]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:55 compute-0 podman[94804]: 2025-10-10 09:46:55.682517928 +0000 UTC m=+0.075991414 container create 95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a (image=quay.io/ceph/ceph:v19, name=serene_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 09:46:55 compute-0 systemd[1]: Started libpod-conmon-95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a.scope.
Oct 10 09:46:55 compute-0 podman[94804]: 2025-10-10 09:46:55.651242808 +0000 UTC m=+0.044716344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdb2eb6097c8f3621e80270f4999d74e76b603af76b13ff0449fba1cfb7c350/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdb2eb6097c8f3621e80270f4999d74e76b603af76b13ff0449fba1cfb7c350/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:55 compute-0 podman[94804]: 2025-10-10 09:46:55.769259448 +0000 UTC m=+0.162732914 container init 95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a (image=quay.io/ceph/ceph:v19, name=serene_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:55 compute-0 podman[94804]: 2025-10-10 09:46:55.777111421 +0000 UTC m=+0.170584857 container start 95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a (image=quay.io/ceph/ceph:v19, name=serene_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:46:55 compute-0 podman[94804]: 2025-10-10 09:46:55.781163123 +0000 UTC m=+0.174636559 container attach 95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a (image=quay.io/ceph/ceph:v19, name=serene_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: Deploying daemon rgw.rgw.compute-1.zajetc on compute-1
Oct 10 09:46:56 compute-0 ceph-mon[73551]: osdmap e43: 3 total, 3 up, 3 in
Oct 10 09:46:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2866042771' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 09:46:56 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 09:46:56 compute-0 ceph-mon[73551]: pgmap v19: 164 pgs: 1 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039652738' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:46:56 compute-0 serene_varahamihira[94819]: 
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 10 09:46:56 compute-0 systemd[1]: libpod-95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a.scope: Deactivated successfully.
Oct 10 09:46:56 compute-0 serene_varahamihira[94819]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.xkdepb/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.rfugxc/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.gkrssp/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502952652","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.zajetc","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.qujzwn","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 10 09:46:56 compute-0 podman[94804]: 2025-10-10 09:46:56.139514131 +0000 UTC m=+0.532987607 container died 95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a (image=quay.io/ceph/ceph:v19, name=serene_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:56 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 44 pg[9.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcdb2eb6097c8f3621e80270f4999d74e76b603af76b13ff0449fba1cfb7c350-merged.mount: Deactivated successfully.
Oct 10 09:46:56 compute-0 podman[94804]: 2025-10-10 09:46:56.184368039 +0000 UTC m=+0.577841475 container remove 95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a (image=quay.io/ceph/ceph:v19, name=serene_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:46:56 compute-0 systemd[1]: libpod-conmon-95664a0d432c2e0fa44052ea70eb5eb0ab662c4de3261a76bd249600a311cf3a.scope: Deactivated successfully.
Oct 10 09:46:56 compute-0 sudo[94801]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:56 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.myiozw on compute-0
Oct 10 09:46:56 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.myiozw on compute-0
Oct 10 09:46:56 compute-0 sudo[94858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:56 compute-0 sudo[94858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:56 compute-0 sudo[94858]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:56 compute-0 sudo[94885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:56 compute-0 sudo[94885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:56 compute-0 sudo[94933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztxfqgfgisnzkrvgnoetjzgostpzpdnm ; /usr/bin/python3'
Oct 10 09:46:56 compute-0 sudo[94933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:57 compute-0 python3[94935]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:57 compute-0 ceph-mon[73551]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4039652738' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 10 09:46:57 compute-0 ceph-mon[73551]: osdmap e44: 3 total, 3 up, 3 in
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:57 compute-0 ceph-mon[73551]: Deploying daemon rgw.rgw.compute-0.myiozw on compute-0
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.132187226 +0000 UTC m=+0.056232787 container create a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37 (image=quay.io/ceph/ceph:v19, name=festive_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:46:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 10 09:46:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 10 09:46:57 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 10 09:46:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 10 09:46:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 10 09:46:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:57 compute-0 systemd[1]: Started libpod-conmon-a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37.scope.
Oct 10 09:46:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261397d50984b61e8e91cd02d8c06250f336b8350971a09ea5f26fababead6d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261397d50984b61e8e91cd02d8c06250f336b8350971a09ea5f26fababead6d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.107000632 +0000 UTC m=+0.031046183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.221563031 +0000 UTC m=+0.145608572 container init a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37 (image=quay.io/ceph/ceph:v19, name=festive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.227038558 +0000 UTC m=+0.151084079 container start a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37 (image=quay.io/ceph/ceph:v19, name=festive_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.230469138 +0000 UTC m=+0.154514679 container attach a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37 (image=quay.io/ceph/ceph:v19, name=festive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.278185259 +0000 UTC m=+0.048171587 container create 98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 09:46:57 compute-0 systemd[1]: Started libpod-conmon-98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8.scope.
Oct 10 09:46:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.347461375 +0000 UTC m=+0.117447703 container init 98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.25841213 +0000 UTC m=+0.028398508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.354070498 +0000 UTC m=+0.124056836 container start 98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.357392646 +0000 UTC m=+0.127378994 container attach 98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:46:57 compute-0 eloquent_torvalds[95015]: 167 167
Oct 10 09:46:57 compute-0 systemd[1]: libpod-98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8.scope: Deactivated successfully.
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.360521827 +0000 UTC m=+0.130508195 container died 98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-357917968ebbbc9fafc8fd0c621c43087d180806925c44bb392c45e7a87a8f3a-merged.mount: Deactivated successfully.
Oct 10 09:46:57 compute-0 podman[94997]: 2025-10-10 09:46:57.405458638 +0000 UTC m=+0.175444956 container remove 98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:57 compute-0 systemd[1]: libpod-conmon-98c0d2e5acfa3ca1c18a8dc70d9a79d3dc9808764d8963c02450058f63162cf8.scope: Deactivated successfully.
Oct 10 09:46:57 compute-0 systemd[1]: Reloading.
Oct 10 09:46:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v22: 165 pgs: 2 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:57 compute-0 systemd-rc-local-generator[95073]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:57 compute-0 systemd-sysv-generator[95077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Oct 10 09:46:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897202661' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 10 09:46:57 compute-0 festive_clarke[94982]: mimic
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.726321625 +0000 UTC m=+0.650367146 container died a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37 (image=quay.io/ceph/ceph:v19, name=festive_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:57 compute-0 systemd[1]: libpod-a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37.scope: Deactivated successfully.
Oct 10 09:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-261397d50984b61e8e91cd02d8c06250f336b8350971a09ea5f26fababead6d0-merged.mount: Deactivated successfully.
Oct 10 09:46:57 compute-0 podman[94949]: 2025-10-10 09:46:57.777921622 +0000 UTC m=+0.701967153 container remove a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37 (image=quay.io/ceph/ceph:v19, name=festive_clarke, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 09:46:57 compute-0 systemd[1]: Reloading.
Oct 10 09:46:57 compute-0 sudo[94933]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:57 compute-0 systemd-rc-local-generator[95137]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:57 compute-0 systemd-sysv-generator[95141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:58 compute-0 systemd[1]: libpod-conmon-a48f3609266ede608fafadd8a71eef34228ca841e98af42c366327bd829ede37.scope: Deactivated successfully.
Oct 10 09:46:58 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.myiozw for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 10 09:46:58 compute-0 ceph-mon[73551]: osdmap e45: 3 total, 3 up, 3 in
Oct 10 09:46:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mon[73551]: pgmap v22: 165 pgs: 2 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/897202661' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:58 compute-0 podman[95198]: 2025-10-10 09:46:58.428940587 +0000 UTC m=+0.072834722 container create 55dae5966b11bbe29d08eaf447b20825914878f43af9bfccddad42e1c58c48f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-0-myiozw, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:58 compute-0 podman[95198]: 2025-10-10 09:46:58.403792975 +0000 UTC m=+0.047687160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3872f380f8e636beb2bba7706012cd9d818d365fe2850c4b6e81c821b59199b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3872f380f8e636beb2bba7706012cd9d818d365fe2850c4b6e81c821b59199b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3872f380f8e636beb2bba7706012cd9d818d365fe2850c4b6e81c821b59199b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3872f380f8e636beb2bba7706012cd9d818d365fe2850c4b6e81c821b59199b3/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.myiozw supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:58 compute-0 podman[95198]: 2025-10-10 09:46:58.534576797 +0000 UTC m=+0.178470982 container init 55dae5966b11bbe29d08eaf447b20825914878f43af9bfccddad42e1c58c48f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-0-myiozw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:46:58 compute-0 podman[95198]: 2025-10-10 09:46:58.540426136 +0000 UTC m=+0.184320271 container start 55dae5966b11bbe29d08eaf447b20825914878f43af9bfccddad42e1c58c48f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-0-myiozw, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:58 compute-0 bash[95198]: 55dae5966b11bbe29d08eaf447b20825914878f43af9bfccddad42e1c58c48f0
Oct 10 09:46:58 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.myiozw for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Oct 10 09:46:58 compute-0 radosgw[95218]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:46:58 compute-0 radosgw[95218]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 10 09:46:58 compute-0 radosgw[95218]: framework: beast
Oct 10 09:46:58 compute-0 radosgw[95218]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 10 09:46:58 compute-0 radosgw[95218]: init_numa not setting numa affinity
Oct 10 09:46:58 compute-0 sudo[94885]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 75bbca0b-442c-4f3e-adf5-b972cab84341 (Updating rgw.rgw deployment (+3 -> 3))
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 75bbca0b-442c-4f3e-adf5-b972cab84341 (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 7c1816d9-c8e4-4601-ae43-6377d8295f0a (Updating mds.cephfs deployment (+3 -> 3))
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:46:58 compute-0 sudo[95785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmwnzeiqjxqhbyjrzlzmlsmiltsplpxi ; /usr/bin/python3'
Oct 10 09:46:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:46:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.vlgajy on compute-2
Oct 10 09:46:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.vlgajy on compute-2
Oct 10 09:46:58 compute-0 sudo[95785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:58 compute-0 python3[95830]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:58 compute-0 podman[95833]: 2025-10-10 09:46:58.952353924 +0000 UTC m=+0.061154845 container create 1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb (image=quay.io/ceph/ceph:v19, name=relaxed_stonebraker, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:46:59 compute-0 systemd[1]: Started libpod-conmon-1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb.scope.
Oct 10 09:46:59 compute-0 podman[95833]: 2025-10-10 09:46:58.925532938 +0000 UTC m=+0.034333959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:46:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5043b38cd25834c3ef5dd0a949b57ba495d4745eff8e0c8d20818524402c178/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5043b38cd25834c3ef5dd0a949b57ba495d4745eff8e0c8d20818524402c178/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:59 compute-0 podman[95833]: 2025-10-10 09:46:59.061431545 +0000 UTC m=+0.170232556 container init 1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb (image=quay.io/ceph/ceph:v19, name=relaxed_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:59 compute-0 podman[95833]: 2025-10-10 09:46:59.077411481 +0000 UTC m=+0.186212442 container start 1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb (image=quay.io/ceph/ceph:v19, name=relaxed_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:46:59 compute-0 podman[95833]: 2025-10-10 09:46:59.082452264 +0000 UTC m=+0.191253225 container attach 1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb (image=quay.io/ceph/ceph:v19, name=relaxed_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 10 09:46:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 10 09:46:59 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 10 09:46:59 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 47 pg[11.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:46:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 10 09:46:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:46:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 10 09:46:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:46:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 10 09:46:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 09:46:59 compute-0 ceph-mon[73551]: osdmap e46: 3 total, 3 up, 3 in
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-0 ceph-mon[73551]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:46:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:59 compute-0 ceph-mon[73551]: Deploying daemon mds.cephfs.compute-2.vlgajy on compute-2
Oct 10 09:46:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v25: 166 pgs: 3 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Oct 10 09:46:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100066023' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 10 09:46:59 compute-0 relaxed_stonebraker[95849]: 
Oct 10 09:46:59 compute-0 systemd[1]: libpod-1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb.scope: Deactivated successfully.
Oct 10 09:46:59 compute-0 relaxed_stonebraker[95849]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":9}}
Oct 10 09:46:59 compute-0 podman[95833]: 2025-10-10 09:46:59.549665586 +0000 UTC m=+0.658466507 container died 1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb (image=quay.io/ceph/ceph:v19, name=relaxed_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:46:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5043b38cd25834c3ef5dd0a949b57ba495d4745eff8e0c8d20818524402c178-merged.mount: Deactivated successfully.
Oct 10 09:46:59 compute-0 podman[95833]: 2025-10-10 09:46:59.600706094 +0000 UTC m=+0.709507015 container remove 1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb (image=quay.io/ceph/ceph:v19, name=relaxed_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:59 compute-0 systemd[1]: libpod-conmon-1cb4df797ca136553c35938db39210d74a02e2b0e84a988cb22184cad5b6ddcb.scope: Deactivated successfully.
Oct 10 09:46:59 compute-0 sudo[95785]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:00 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 48 pg[11.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e3 new map
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-10-10T09:47:00:211513+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:46:34.511367+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.vlgajy{-1:24337} state up:standby seq 1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:00 compute-0 ceph-mon[73551]: osdmap e47: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: pgmap v25: 166 pgs: 3 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4100066023' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: osdmap e48: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:boot
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] as mds.0
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.vlgajy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"} v 0)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e3 all = 0
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e4 new map
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-10-10T09:47:00:244509+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:00.244232+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:creating seq 1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:creating}
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:47:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.vlgajy is now active in filesystem cephfs as rank 0
Oct 10 09:47:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.cchwlo on compute-0
Oct 10 09:47:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.cchwlo on compute-0
Oct 10 09:47:00 compute-0 sudo[95884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:00 compute-0 sudo[95884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:00 compute-0 sudo[95884]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:00 compute-0 sudo[95909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:00 compute-0 sudo[95909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:00 compute-0 podman[95983]: 2025-10-10 09:47:00.863716216 +0000 UTC m=+0.044706945 container create 85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:47:00 compute-0 systemd[1]: Started libpod-conmon-85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a.scope.
Oct 10 09:47:00 compute-0 podman[95983]: 2025-10-10 09:47:00.837769079 +0000 UTC m=+0.018759788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:00 compute-0 podman[95983]: 2025-10-10 09:47:00.959887831 +0000 UTC m=+0.140878520 container init 85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_euclid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:47:00 compute-0 podman[95983]: 2025-10-10 09:47:00.971377652 +0000 UTC m=+0.152368371 container start 85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Oct 10 09:47:00 compute-0 podman[95983]: 2025-10-10 09:47:00.975571797 +0000 UTC m=+0.156562516 container attach 85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:47:00 compute-0 friendly_euclid[95999]: 167 167
Oct 10 09:47:00 compute-0 systemd[1]: libpod-85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a.scope: Deactivated successfully.
Oct 10 09:47:00 compute-0 podman[95983]: 2025-10-10 09:47:00.977699886 +0000 UTC m=+0.158690625 container died 85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05d08e131e3d3000ef1ef9fad708ab27c2148238f0cea9d33c0283a88d75d60-merged.mount: Deactivated successfully.
Oct 10 09:47:01 compute-0 podman[95983]: 2025-10-10 09:47:01.025921743 +0000 UTC m=+0.206912472 container remove 85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:47:01 compute-0 systemd[1]: libpod-conmon-85a8a70f40443a87dd3764100b5793b942d7c4a2d62b8524150ea3c230caf79a.scope: Deactivated successfully.
Oct 10 09:47:01 compute-0 systemd[1]: Reloading.
Oct 10 09:47:01 compute-0 systemd-rc-local-generator[96038]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:01 compute-0 systemd-sysv-generator[96042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 10 09:47:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 10 09:47:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 10 09:47:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 10 09:47:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:boot
Oct 10 09:47:01 compute-0 ceph-mon[73551]: daemon mds.cephfs.compute-2.vlgajy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 10 09:47:01 compute-0 ceph-mon[73551]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 10 09:47:01 compute-0 ceph-mon[73551]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 10 09:47:01 compute-0 ceph-mon[73551]: fsmap cephfs:0 1 up:standby
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"}]: dispatch
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:01 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:creating}
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:47:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:01 compute-0 ceph-mon[73551]: daemon mds.cephfs.compute-2.vlgajy is now active in filesystem cephfs as rank 0
Oct 10 09:47:01 compute-0 ceph-mon[73551]: Deploying daemon mds.cephfs.compute-0.cchwlo on compute-0
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e5 new map
Oct 10 09:47:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-10-10T09:47:01:287113+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:01.287110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Oct 10 09:47:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:active
Oct 10 09:47:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active}
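The print_map text above is the fsmap the mon persists at each epoch; the same dump is available on demand, so it rarely needs to be read out of the journal:

    $ ceph fs dump          # full fsmap, same layout as the e5 print_map above
    $ ceph fs get cephfs    # only the 'cephfs' filesystem section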
Oct 10 09:47:01 compute-0 systemd[1]: Reloading.
Oct 10 09:47:01 compute-0 systemd-rc-local-generator[96077]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:01 compute-0 systemd-sysv-generator[96081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v28: 167 pgs: 1 unknown, 1 creating+peering, 165 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Oct 10 09:47:01 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.cchwlo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:01 compute-0 podman[96139]: 2025-10-10 09:47:01.984394443 +0000 UTC m=+0.054359985 container create 3ff91d629822ca596e2166817c7b6893b2b30a8f4b8b4a1ffc036ccafe8fa7b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-0-cchwlo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5024d1f853c7caac6f676577ab0dfbc28cc58bae9033f05d9df0085965070515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5024d1f853c7caac6f676577ab0dfbc28cc58bae9033f05d9df0085965070515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5024d1f853c7caac6f676577ab0dfbc28cc58bae9033f05d9df0085965070515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5024d1f853c7caac6f676577ab0dfbc28cc58bae9033f05d9df0085965070515/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.cchwlo supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:02 compute-0 podman[96139]: 2025-10-10 09:47:01.956545724 +0000 UTC m=+0.026511326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:02 compute-0 podman[96139]: 2025-10-10 09:47:02.060624805 +0000 UTC m=+0.130590327 container init 3ff91d629822ca596e2166817c7b6893b2b30a8f4b8b4a1ffc036ccafe8fa7b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-0-cchwlo, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:47:02 compute-0 podman[96139]: 2025-10-10 09:47:02.070000258 +0000 UTC m=+0.139965770 container start 3ff91d629822ca596e2166817c7b6893b2b30a8f4b8b4a1ffc036ccafe8fa7b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-0-cchwlo, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:47:02 compute-0 bash[96139]: 3ff91d629822ca596e2166817c7b6893b2b30a8f4b8b4a1ffc036ccafe8fa7b5
Oct 10 09:47:02 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.cchwlo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
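cephadm runs each daemon as a podman container under a templated systemd unit keyed by the cluster fsid, so the MDS that just started can be inspected from either side; a sketch using the names from the log lines above:

    $ systemctl status ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@mds.cephfs.compute-0.cchwlo
    $ podman ps --filter name=mds-cephfs-compute-0-cchwlo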
Oct 10 09:47:02 compute-0 ceph-mds[96159]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:47:02 compute-0 ceph-mds[96159]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 10 09:47:02 compute-0 ceph-mds[96159]: main not setting numa affinity
Oct 10 09:47:02 compute-0 ceph-mds[96159]: pidfile_write: ignore empty --pid-file
Oct 10 09:47:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-0-cchwlo[96155]: starting mds.cephfs.compute-0.cchwlo at 
Oct 10 09:47:02 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Updating MDS map to version 5 from mon.0
Oct 10 09:47:02 compute-0 sudo[95909]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.fhagzt on compute-1
Oct 10 09:47:02 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.fhagzt on compute-1
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
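pg_autoscale_bias tells the PG autoscaler to allot a pool more placement groups than its stored-data share alone would justify; here RGW requests a bias of 4 on its metadata pool, which is small but IO-hot. The manual equivalent of the command being dispatched:

    $ ceph osd pool set default.rgw.meta pg_autoscale_bias 4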
Oct 10 09:47:02 compute-0 ceph-mon[73551]: osdmap e49: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:active
Oct 10 09:47:02 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active}
Oct 10 09:47:02 compute-0 ceph-mon[73551]: pgmap v28: 167 pgs: 1 unknown, 1 creating+peering, 165 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-0 ceph-mon[73551]: osdmap e50: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e6 new map
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-10-10T09:47:02:297566+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:01.287110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:02 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Updating MDS map to version 6 from mon.0
Oct 10 09:47:02 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Monitors have assigned me to become a standby
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] up:boot
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 1 up:standby
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"} v 0)
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"}]: dispatch
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e6 all = 0
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e7 new map
Oct 10 09:47:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-10-10T09:47:02:322797+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:01.287110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 1 up:standby
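Epochs e6 and e7 differ only in standby_count_wanted (0 to 1), which is why the map is reprinted without any daemon state change. The compact operator view of the same active-plus-standby layout is:

    $ ceph fs status cephfs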
Oct 10 09:47:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 10 09:47:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 10 09:47:03 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 10 09:47:03 compute-0 ceph-mon[73551]: Deploying daemon mds.cephfs.compute-1.fhagzt on compute-1
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] up:boot
Oct 10 09:47:03 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 1 up:standby
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"}]: dispatch
Oct 10 09:47:03 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 1 up:standby
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-0 ceph-mon[73551]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-0 ceph-mon[73551]: osdmap e51: 3 total, 3 up, 3 in
Oct 10 09:47:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:03 compute-0 radosgw[95218]: v1 topic migration: starting v1 topic migration..
Oct 10 09:47:03 compute-0 radosgw[95218]: LDAP not started since no server URIs were provided in the configuration.
Oct 10 09:47:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-0-myiozw[95214]: 2025-10-10T09:47:03.488+0000 7f97efe93980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 10 09:47:03 compute-0 radosgw[95218]: v1 topic migration: finished v1 topic migration
Oct 10 09:47:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
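The RGWReshardLock notices are expected with multiple radosgw daemons: all three scan the shared reshard log shards, only one wins each lock, and the others skip that shard for the cycle. The queue being contended over can be inspected with:

    $ radosgw-admin reshard list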
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v31: 167 pgs: 1 unknown, 1 creating+peering, 165 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Oct 10 09:47:03 compute-0 radosgw[95218]: framework: beast
Oct 10 09:47:03 compute-0 radosgw[95218]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 10 09:47:03 compute-0 radosgw[95218]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 10 09:47:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-0 radosgw[95218]: starting handler: beast
Oct 10 09:47:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-0 radosgw[95218]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:47:03 compute-0 radosgw[95218]: mgrc service_daemon_register rgw.14580 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.myiozw,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864352,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=ac475a20-bf0e-4531-bd8b-a44afde7c93f,zone_name=default,zonegroup_id=8929b431-04ce-48e1-bb4a-cedab812d97d,zonegroup_name=default}
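The service_daemon_register metadata confirms the beast frontend is bound to 192.168.122.100:8082. A quick liveness probe (a sketch, from any host with a route to that address) is an anonymous S3 request, which should come back as a ListAllMyBucketsResult XML document:

    $ curl -s http://192.168.122.100:8082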
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 13 completed events
Oct 10 09:47:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:47:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:47:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:47:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 7c1816d9-c8e4-4601-ae43-6377d8295f0a (Updating mds.cephfs deployment (+3 -> 3))
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 7c1816d9-c8e4-4601-ae43-6377d8295f0a (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 6add9727-89f1-4824-a2a1-362dd0041280 (Updating nfs.cephfs deployment (+3 -> 3))
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx-rgw
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx-rgw
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.mssvzx's ganesha conf is defaulting to empty
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.mssvzx's ganesha conf is defaulting to empty
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.mssvzx on compute-1
Oct 10 09:47:04 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.mssvzx on compute-1
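The key creation, grace-table registration, and rados config-object check above are the mgr nfs module preparing a ganesha daemon. Once deployed, the NFS cluster it belongs to can be listed and inspected with:

    $ ceph nfs cluster ls
    $ ceph nfs cluster info cephfs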
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e8 new map
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-10-10T09:47:04:615775+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:04.295946+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] up:boot
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:active
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"} v 0)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e8 all = 0
Oct 10 09:47:04 compute-0 ceph-mon[73551]: pgmap v31: 167 pgs: 1 unknown, 1 creating+peering, 165 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:47:04 compute-0 ceph-mon[73551]: Cluster is now healthy
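POOL_APP_NOT_ENABLED was the last open health check, so clearing it flips the cluster to HEALTH_OK. The snapshot commands for confirming that state are:

    $ ceph -s              # overall status, including the health banner
    $ ceph health detail   # per-check breakdown when anything is not OK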
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v32: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 9.4 KiB/s wr, 445 op/s
Oct 10 09:47:05 compute-0 ceph-mon[73551]: Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx
Oct 10 09:47:05 compute-0 ceph-mon[73551]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 10 09:47:05 compute-0 ceph-mon[73551]: Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:05 compute-0 ceph-mon[73551]: Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx-rgw
Oct 10 09:47:05 compute-0 ceph-mon[73551]: Bind address in nfs.cephfs.0.0.compute-1.mssvzx's ganesha conf is defaulting to empty
Oct 10 09:47:05 compute-0 ceph-mon[73551]: Deploying daemon nfs.cephfs.0.0.compute-1.mssvzx on compute-1
Oct 10 09:47:05 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] up:boot
Oct 10 09:47:05 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:active
Oct 10 09:47:05 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:05 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"}]: dispatch
Oct 10 09:47:06 compute-0 ceph-mon[73551]: pgmap v32: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 9.4 KiB/s wr, 445 op/s
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e9 new map
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-10-10T09:47:06:672904+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:04.295946+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
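The print_map block above is the monitor's text rendering of the FSMap: epoch 8 of filesystem 'cephfs' (fscid 1), rank 0 active on mds.cephfs.compute-2.vlgajy, and two standby daemons. The same map can be pulled as JSON with the "fs dump" mon command; a minimal python-rados sketch, assuming a reachable cluster and an admin conf/keyring at the default paths:

    import json

    import rados

    # Fetch the fsmap that mon.compute-0 prints above. Connection paths
    # are assumptions, not taken from the log.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "fs dump", "format": "json"}), b"")
        fsmap = json.loads(outbuf)
        print("epoch:", fsmap["epoch"])
        print("standby daemons:", len(fsmap.get("standbys", [])))
    finally:
        cluster.shutdown()
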
Oct 10 09:47:06 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Updating MDS map to version 9 from mon.0
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] up:standby
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:06 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.boccfy
Oct 10 09:47:06 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.boccfy
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
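The auth get-or-create above is how the cephadm module mints a per-daemon cephx key: the mon cap is read-only and the osd cap is confined to the .nfs pool under the "cephfs" namespace, so this ganesha daemon can only touch its own recovery and export objects. A python-rados sketch of the same call, with the entity and cap strings copied verbatim from the audit line (the connection paths are assumptions):

    import json

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = {
            "prefix": "auth get-or-create",
            "entity": "client.nfs.cephfs.1.0.compute-2.boccfy",
            "caps": ["mon", "allow r",
                     "osd", "allow rw pool=.nfs namespace=cephfs"],
        }
        ret, keyring, errs = cluster.mon_command(json.dumps(cmd), b"")
        # ret == 0 on success; keyring holds the [client....] section that
        # cephadm later mounts into the daemon container.
    finally:
        cluster.shutdown()
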
Oct 10 09:47:06 compute-0 ceph-mgr[73845]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 10 09:47:06 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
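client.mgr.nfs.grace.nfs.cephfs is a throwaway key: the mgr creates it here, uses it to update the Ganesha grace database stored in the .nfs pool, then removes it again at 09:47:09 (the auth rm below). A hedged reconstruction of the grace update using the ganesha-rados-grace tool; the exact flags are an assumption, only the pool, namespace and service id appear in the log:

    import subprocess

    # Hedged reconstruction: register nfs.cephfs.1 in the grace database
    # kept in the .nfs pool, namespace "cephfs". Flag spelling is an
    # assumption; pool, namespace and node id come from the log above.
    subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs",
         "add", "nfs.cephfs.1"],
        check=True,
    )
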
Oct 10 09:47:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
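config generate-minimal-conf returns the smallest ceph.conf a client needs, essentially the fsid plus the mon addresses; cephadm writes the result into each daemon's container, which is why /etc/ceph/ceph.conf shows up among the bind mounts at 09:47:13 below. A sketch of the same request (connection boilerplate assumed as in the earlier examples):

    import json

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, conf, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(conf.decode())  # minimal ceph.conf with fsid and mon_host
    finally:
        cluster.shutdown()
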
Oct 10 09:47:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v33: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 8.0 KiB/s wr, 376 op/s
Oct 10 09:47:07 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] up:standby
Oct 10 09:47:07 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:07 compute-0 ceph-mon[73551]: Creating key for client.nfs.cephfs.1.0.compute-2.boccfy
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:07 compute-0 ceph-mon[73551]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:08 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 14 completed events
Oct 10 09:47:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:08 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 1f3ee081-1a7a-411c-ab9b-d037398029c1 (Global Recovery Event) in 10 seconds
Oct 10 09:47:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e10 new map
Oct 10 09:47:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2025-10-10T09:47:08.789045+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:04.295946+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:08 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] up:standby
Oct 10 09:47:08 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:08 compute-0 ceph-mon[73551]: pgmap v33: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 8.0 KiB/s wr, 376 op/s
Oct 10 09:47:08 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v34: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 6.2 KiB/s wr, 295 op/s
Oct 10 09:47:09 compute-0 ceph-mon[73551]: mds.? [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] up:standby
Oct 10 09:47:09 compute-0 ceph-mon[73551]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 10 09:47:09 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:09 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:09 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:09 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:09 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.boccfy-rgw
Oct 10 09:47:09 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.boccfy-rgw
Oct 10 09:47:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 10 09:47:09 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:10 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.boccfy's ganesha conf is defaulting to empty
Oct 10 09:47:10 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.boccfy's ganesha conf is defaulting to empty
Oct 10 09:47:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:10 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.boccfy on compute-2
Oct 10 09:47:10 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.boccfy on compute-2
Oct 10 09:47:10 compute-0 ceph-mon[73551]: pgmap v34: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 6.2 KiB/s wr, 295 op/s
Oct 10 09:47:10 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:10 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:10 compute-0 ceph-mon[73551]: Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:10 compute-0 ceph-mon[73551]: Creating key for client.nfs.cephfs.1.0.compute-2.boccfy-rgw
Oct 10 09:47:10 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:10 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:10 compute-0 ceph-mon[73551]: Bind address in nfs.cephfs.1.0.compute-2.boccfy's ganesha conf is defaulting to empty
Oct 10 09:47:10 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:10 compute-0 ceph-mon[73551]: Deploying daemon nfs.cephfs.1.0.compute-2.boccfy on compute-2
Oct 10 09:47:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v35: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 6.1 KiB/s wr, 299 op/s
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:11 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo
Oct 10 09:47:11 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:11 compute-0 ceph-mgr[73845]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 10 09:47:11 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo-rgw
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo-rgw
Oct 10 09:47:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 10 09:47:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.ruydzo's ganesha conf is defaulting to empty
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.ruydzo's ganesha conf is defaulting to empty
Oct 10 09:47:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:47:12 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0
Oct 10 09:47:12 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0
Oct 10 09:47:12 compute-0 sudo[96321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:12 compute-0 sudo[96321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:12 compute-0 sudo[96321]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:12 compute-0 sudo[96346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:12 compute-0 sudo[96346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
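These two sudo entries are the deployment itself: the mgr connects to the host as ceph-admin, locates python3, then runs the cephadm binary it staged under /var/lib/ceph/<fsid> with the _orch deploy subcommand. A reconstruction of that invocation with every argument taken verbatim from the log; the stdin payload is an assumption (the mgr passes the daemon spec to cephadm as JSON, which journald does not capture):

    import subprocess

    # Arguments copied from the sudo COMMAND line above; the input value
    # is a placeholder for the real deploy-spec JSON fed on stdin.
    subprocess.run(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/"
         "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--image",
         "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
         "--timeout", "895",
         "_orch", "deploy",
         "--fsid", "21f084a3-af34-5230-afe4-ea5cd24a55f4"],
        input=b"{}",
        check=True,
    )
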
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.617500137 +0000 UTC m=+0.043119153 container create 359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 09:47:12 compute-0 systemd[1]: Started libpod-conmon-359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f.scope.
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.596774668 +0000 UTC m=+0.022393704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:12 compute-0 ceph-mon[73551]: pgmap v35: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 6.1 KiB/s wr, 299 op/s
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:12 compute-0 ceph-mon[73551]: Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:12 compute-0 ceph-mon[73551]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:12 compute-0 ceph-mon[73551]: Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:12 compute-0 ceph-mon[73551]: Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo-rgw
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:12 compute-0 ceph-mon[73551]: Bind address in nfs.cephfs.2.0.compute-0.ruydzo's ganesha conf is defaulting to empty
Oct 10 09:47:12 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:12 compute-0 ceph-mon[73551]: Deploying daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.720901445 +0000 UTC m=+0.146520551 container init 359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.729957048 +0000 UTC m=+0.155576064 container start 359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.733950156 +0000 UTC m=+0.159569212 container attach 359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:47:12 compute-0 competent_stonebraker[96426]: 167 167
Oct 10 09:47:12 compute-0 systemd[1]: libpod-359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f.scope: Deactivated successfully.
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.737689917 +0000 UTC m=+0.163308943 container died 359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-129db430c785077e928cff2e1fa06798743ec2bdcfcc21796c7c70b003481065-merged.mount: Deactivated successfully.
Oct 10 09:47:12 compute-0 podman[96410]: 2025-10-10 09:47:12.774803566 +0000 UTC m=+0.200422582 container remove 359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:47:12 compute-0 systemd[1]: libpod-conmon-359b445319346f39d3aba016f71ce3579cd4f8846a9aa765418ad5ad1002d77f.scope: Deactivated successfully.
Oct 10 09:47:12 compute-0 systemd[1]: Reloading.
Oct 10 09:47:12 compute-0 systemd-sysv-generator[96472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:12 compute-0 systemd-rc-local-generator[96469]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:13 compute-0 systemd[1]: Reloading.
Oct 10 09:47:13 compute-0 systemd-rc-local-generator[96509]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:13 compute-0 systemd-sysv-generator[96515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:13 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v36: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 5.6 KiB/s wr, 270 op/s
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 15 completed events
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:13 compute-0 podman[96569]: 2025-10-10 09:47:13.678068364 +0000 UTC m=+0.054219460 container create 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14c4a66091b425cc2098d0baaa5439d8861a2301a4264f534f660f4c730988/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14c4a66091b425cc2098d0baaa5439d8861a2301a4264f534f660f4c730988/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14c4a66091b425cc2098d0baaa5439d8861a2301a4264f534f660f4c730988/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14c4a66091b425cc2098d0baaa5439d8861a2301a4264f534f660f4c730988/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:13 compute-0 podman[96569]: 2025-10-10 09:47:13.745344386 +0000 UTC m=+0.121495512 container init 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:47:13 compute-0 podman[96569]: 2025-10-10 09:47:13.653957696 +0000 UTC m=+0.030108812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:13 compute-0 podman[96569]: 2025-10-10 09:47:13.759995349 +0000 UTC m=+0.136146445 container start 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:47:13 compute-0 bash[96569]: 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84
Oct 10 09:47:13 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:47:13 compute-0 sudo[96346]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:47:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:47:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 6add9727-89f1-4824-a2a1-362dd0041280 (Updating nfs.cephfs deployment (+3 -> 3))
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 6add9727-89f1-4824-a2a1-362dd0041280 (Updating nfs.cephfs deployment (+3 -> 3)) in 10 seconds
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
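The ret=-2 in the two recovery-database messages above is a negated errno, ENOENT: on the first start of nfs.cephfs.2 there is no recovery object in the .nfs pool yet, so the traverse finds nothing, grace runs with a client count of 0, and the reaper lifts it moments later (09:47:13 below). A quick check of the code-to-name mapping:

    import errno
    import os

    def describe(ret):
        """Translate a negative return code such as the -2 above."""
        e = -ret
        return f"{ret} = -{errno.errorcode[e]} ({os.strerror(e)})"

    print(describe(-2))  # -2 = -ENOENT (No such file or directory)
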
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:47:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 9ed56a50-969f-4b67-9531-b2b6d305b577 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct 10 09:47:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:47:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.ehhoyw on compute-1
Oct 10 09:47:13 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.ehhoyw on compute-1
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Oct 10 09:47:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:13 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:47:14 compute-0 ceph-mon[73551]: pgmap v36: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 5.6 KiB/s wr, 270 op/s
Oct 10 09:47:14 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-0 ceph-mon[73551]: Deploying daemon haproxy.nfs.cephfs.compute-1.ehhoyw on compute-1
Oct 10 09:47:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v37: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 6.6 KiB/s wr, 239 op/s
Oct 10 09:47:16 compute-0 ceph-mon[73551]: pgmap v37: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 6.6 KiB/s wr, 239 op/s
Oct 10 09:47:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v38: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:18 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 16 completed events
Oct 10 09:47:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:18 compute-0 ceph-mon[73551]: pgmap v38: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:18 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:47:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:47:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:18 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.gptveb on compute-0
Oct 10 09:47:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.gptveb on compute-0
Oct 10 09:47:19 compute-0 sudo[96638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:19 compute-0 sudo[96638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:19 compute-0 sudo[96638]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:19 compute-0 sudo[96663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:19 compute-0 sudo[96663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v39: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:19 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:19 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:19 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:19 compute-0 ceph-mon[73551]: Deploying daemon haproxy.nfs.cephfs.compute-0.gptveb on compute-0
Oct 10 09:47:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:20 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa840000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:20 compute-0 ceph-mon[73551]: pgmap v39: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v40: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:21 compute-0 podman[96728]: 2025-10-10 09:47:21.958053677 +0000 UTC m=+2.482311694 container create e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401 (image=quay.io/ceph/haproxy:2.3, name=loving_bhaskara)
Oct 10 09:47:22 compute-0 systemd[1]: Started libpod-conmon-e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401.scope.
Oct 10 09:47:22 compute-0 podman[96728]: 2025-10-10 09:47:21.931180059 +0000 UTC m=+2.455438106 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 09:47:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:22 compute-0 podman[96728]: 2025-10-10 09:47:22.051521484 +0000 UTC m=+2.575779541 container init e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401 (image=quay.io/ceph/haproxy:2.3, name=loving_bhaskara)
Oct 10 09:47:22 compute-0 podman[96728]: 2025-10-10 09:47:22.060843845 +0000 UTC m=+2.585101892 container start e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401 (image=quay.io/ceph/haproxy:2.3, name=loving_bhaskara)
Oct 10 09:47:22 compute-0 podman[96728]: 2025-10-10 09:47:22.06438804 +0000 UTC m=+2.588646087 container attach e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401 (image=quay.io/ceph/haproxy:2.3, name=loving_bhaskara)
Oct 10 09:47:22 compute-0 loving_bhaskara[96848]: 0 0
Oct 10 09:47:22 compute-0 podman[96728]: 2025-10-10 09:47:22.069169754 +0000 UTC m=+2.593427801 container died e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401 (image=quay.io/ceph/haproxy:2.3, name=loving_bhaskara)
Oct 10 09:47:22 compute-0 systemd[1]: libpod-e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401.scope: Deactivated successfully.
Oct 10 09:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca24172a2db5dcfd0179c6db452bf30199dda1575cfad8c525398bb4f153aff-merged.mount: Deactivated successfully.
Oct 10 09:47:22 compute-0 podman[96728]: 2025-10-10 09:47:22.118751104 +0000 UTC m=+2.643009151 container remove e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401 (image=quay.io/ceph/haproxy:2.3, name=loving_bhaskara)
Oct 10 09:47:22 compute-0 systemd[1]: libpod-conmon-e27a7b9a2561f3424f91127f65b50311ec782a3e24c91506a2aea6e2a4f58401.scope: Deactivated successfully.
Oct 10 09:47:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:22 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c0016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:22 compute-0 systemd[1]: Reloading.
Oct 10 09:47:22 compute-0 systemd-rc-local-generator[96895]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:22 compute-0 systemd-sysv-generator[96899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:22 compute-0 systemd[1]: Reloading.
Oct 10 09:47:22 compute-0 systemd-rc-local-generator[96937]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:22 compute-0 systemd-sysv-generator[96941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:22 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.gptveb for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:22 compute-0 ceph-mon[73551]: pgmap v40: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:23 compute-0 podman[96993]: 2025-10-10 09:47:23.109548389 +0000 UTC m=+0.050113819 container create 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda51636d2b32f30199e990d17af8e674f2e77d38febe8fb79262960bad0bc84/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:23 compute-0 podman[96993]: 2025-10-10 09:47:23.168702849 +0000 UTC m=+0.109268289 container init 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:47:23 compute-0 podman[96993]: 2025-10-10 09:47:23.175790288 +0000 UTC m=+0.116355708 container start 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:47:23 compute-0 bash[96993]: 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6
Oct 10 09:47:23 compute-0 podman[96993]: 2025-10-10 09:47:23.087731145 +0000 UTC m=+0.028296605 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 09:47:23 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.gptveb for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [NOTICE] 282/094723 (2) : New worker #1 (4) forked
Oct 10 09:47:23 compute-0 sudo[96663]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:23 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.eokdol on compute-2
Oct 10 09:47:23 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.eokdol on compute-2
Oct 10 09:47:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v41: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 10 09:47:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:24 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:24 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:24 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:24 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:24 compute-0 ceph-mon[73551]: Deploying daemon haproxy.nfs.cephfs.compute-2.eokdol on compute-2
Oct 10 09:47:24 compute-0 ceph-mon[73551]: pgmap v41: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 10 09:47:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:24 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v42: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 10 09:47:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:26 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:26 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:26 compute-0 ceph-mon[73551]: pgmap v42: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 10 09:47:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:47:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:47:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v43: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Oct 10 09:47:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.fcbgvm on compute-2
Oct 10 09:47:27 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.fcbgvm on compute-2
Oct 10 09:47:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:28 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.414503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648414705, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6779, "num_deletes": 251, "total_data_size": 12712241, "memory_usage": 13508256, "flush_reason": "Manual Compaction"}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648482076, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11340509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 6916, "table_properties": {"data_size": 11316430, "index_size": 15133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75919, "raw_average_key_size": 24, "raw_value_size": 11256483, "raw_average_value_size": 3578, "num_data_blocks": 671, "num_entries": 3146, "num_filter_entries": 3146, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089398, "oldest_key_time": 1760089398, "file_creation_time": 1760089648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 67928 microseconds, and 42384 cpu microseconds.
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.482292) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11340509 bytes OK
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.482591) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.484184) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.484223) EVENT_LOG_v1 {"time_micros": 1760089648484215, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.484252) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12681673, prev total WAL file size 12681673, number of live WAL files 2.
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.489233) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(10MB) 13(57KB) 8(1944B)]
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648489410, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11400947, "oldest_snapshot_seqno": -1}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-0 ceph-mon[73551]: pgmap v43: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:28 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:28 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:28 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:28 compute-0 ceph-mon[73551]: Deploying daemon keepalived.nfs.cephfs.compute-2.fcbgvm on compute-2
Oct 10 09:47:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:28 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2968 keys, 11382966 bytes, temperature: kUnknown
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648566304, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11382966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11359228, "index_size": 15245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7429, "raw_key_size": 74901, "raw_average_key_size": 25, "raw_value_size": 11300828, "raw_average_value_size": 3807, "num_data_blocks": 676, "num_entries": 2968, "num_filter_entries": 2968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760089648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.566679) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11382966 bytes
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.567727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 147.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(10.9, 0.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3255, records dropped: 287 output_compression: NoCompression
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.567749) EVENT_LOG_v1 {"time_micros": 1760089648567737, "job": 4, "event": "compaction_finished", "compaction_time_micros": 77055, "compaction_time_cpu_micros": 38955, "output_level": 6, "num_output_files": 1, "total_output_size": 11382966, "num_input_records": 3255, "num_output_records": 2968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:28.489058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648572554, "job": 0, "event": "table_file_deletion", "file_number": 19}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648572673, "job": 0, "event": "table_file_deletion", "file_number": 13}
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648572741, "job": 0, "event": "table_file_deletion", "file_number": 8}
Oct 10 09:47:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:28 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v44: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:30 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c001b20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:30 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:30 compute-0 ceph-mon[73551]: pgmap v44: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:30 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v45: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:32 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:47:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:32 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c001b20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:47:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.twbftp on compute-1
Oct 10 09:47:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.twbftp on compute-1
Oct 10 09:47:32 compute-0 ceph-mon[73551]: pgmap v45: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:32 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:32 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:47:33
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control', 'images', 'backups', '.nfs', '.mgr']
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v46: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 09:47:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:47:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:47:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:47:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 10 09:47:33 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:33 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:33 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:33 compute-0 ceph-mon[73551]: Deploying daemon keepalived.nfs.cephfs.compute-1.twbftp on compute-1
Oct 10 09:47:33 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 10 09:47:33 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev c1632da4-9295-4c54-8849-6dace08cdd44 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 10 09:47:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:47:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:34 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:34 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 10 09:47:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 10 09:47:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 10 09:47:34 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev ebaa5e0d-6c93-44b5-a880-3f32c0bcef01 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 10 09:47:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:47:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:34 compute-0 ceph-mon[73551]: pgmap v46: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:34 compute-0 ceph-mon[73551]: osdmap e52: 3 total, 3 up, 3 in
Oct 10 09:47:34 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:34 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c001b20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v49: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 10 09:47:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 10 09:47:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 54 pg[8.0( v 51'44 (0'0,51'44] local-lis/les=40/41 n=5 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=14.787871361s) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 51'43 mlcod 51'43 active pruub 177.133285522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:35 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 3345a86d-7a54-47c9-97a0-9aeffc0baee8 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 10 09:47:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:47:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 54 pg[8.0( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=14.787871361s) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 51'43 mlcod 0'0 unknown pruub 177.133285522s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:35 compute-0 ceph-mon[73551]: osdmap e53: 3 total, 3 up, 3 in
Oct 10 09:47:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:36 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:36 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 10 09:47:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 082cd7cd-382b-429f-a5fc-16ca2b9e3783 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 10 09:47:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:47:36 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.14( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.15( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.6( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.16( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.17( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.10( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.11( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.2( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.3( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.8( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.f( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.9( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.a( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.e( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.b( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.c( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1( v 51'44 (0'0,51'44] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.d( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.5( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.7( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.4( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1a( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1b( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.19( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.18( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1e( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1d( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1f( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1c( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.13( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:36 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.12( v 51'44 lc 0'0 (0'0,51'44] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.6( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-mon[73551]: pgmap v49: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 10 09:47:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:36 compute-0 ceph-mon[73551]: osdmap e54: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:36 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:36 compute-0 ceph-mon[73551]: osdmap e55: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.10( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.11( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.15( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.3( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.16( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.2( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.9( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.f( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.e( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.b( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.a( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.0( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 51'43 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.5( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1a( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.7( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.c( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1b( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.18( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1d( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.d( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.13( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1c( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1e( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.1f( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:36 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 55 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [0] r=0 lpr=54 pi=[40,54)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.mciijj on compute-0
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.mciijj on compute-0
Oct 10 09:47:37 compute-0 sudo[97024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:37 compute-0 sudo[97024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:37 compute-0 sudo[97024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:37 compute-0 sudo[97049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:37 compute-0 sudo[97049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v52: 229 pgs: 62 unknown, 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Oct 10 09:47:37 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 10 09:47:37 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 56 pg[9.0( v 44'6 (0'0,44'6] local-lis/les=43/44 n=6 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=14.356333733s) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 44'5 mlcod 44'5 active pruub 178.722915649s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:37 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 804b5c76-2ca9-4b91-a17c-ad4e7747f107 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Oct 10 09:47:37 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 56 pg[9.0( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=14.356333733s) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 44'5 mlcod 0'0 unknown pruub 178.722915649s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:37 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:37 compute-0 ceph-mon[73551]: osdmap e56: 3 total, 3 up, 3 in
Oct 10 09:47:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:38 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:38 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Oct 10 09:47:38 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 10 09:47:38 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 10 09:47:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 10 09:47:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:38 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.15( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.7( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.14( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.17( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.11( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.10( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.2( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.16( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.3( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.e( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.9( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.8( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.b( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.f( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.c( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.d( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.a( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.6( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1( v 44'6 (0'0,44'6] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.4( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.5( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1a( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1b( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.18( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.19( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 7cc50c36-c0d2-4f81-98b4-c5b0672c5822 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev c1632da4-9295-4c54-8849-6dace08cdd44 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event c1632da4-9295-4c54-8849-6dace08cdd44 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev ebaa5e0d-6c93-44b5-a880-3f32c0bcef01 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event ebaa5e0d-6c93-44b5-a880-3f32c0bcef01 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 3345a86d-7a54-47c9-97a0-9aeffc0baee8 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 3345a86d-7a54-47c9-97a0-9aeffc0baee8 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 082cd7cd-382b-429f-a5fc-16ca2b9e3783 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 082cd7cd-382b-429f-a5fc-16ca2b9e3783 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 804b5c76-2ca9-4b91-a17c-ad4e7747f107 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 804b5c76-2ca9-4b91-a17c-ad4e7747f107 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 7cc50c36-c0d2-4f81-98b4-c5b0672c5822 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct 10 09:47:38 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 7cc50c36-c0d2-4f81-98b4-c5b0672c5822 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1c( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1d( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1f( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1e( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.13( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.15( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.7( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.12( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.17( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.11( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.14( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.10( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.3( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.2( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.e( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.9( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.16( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.b( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.8( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.f( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.c( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.0( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 44'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.d( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.6( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.a( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.4( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.5( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1a( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1b( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.18( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:38 compute-0 ceph-mon[73551]: Deploying daemon keepalived.nfs.cephfs.compute-0.mciijj on compute-0
Oct 10 09:47:38 compute-0 ceph-mon[73551]: pgmap v52: 229 pgs: 62 unknown, 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 8.6 deep-scrub starts
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 8.6 deep-scrub ok
Oct 10 09:47:38 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 7.1c scrub starts
Oct 10 09:47:38 compute-0 ceph-mon[73551]: 7.1c scrub ok
Oct 10 09:47:38 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:38 compute-0 ceph-mon[73551]: osdmap e57: 3 total, 3 up, 3 in
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.19( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1d( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1c( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1f( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.1e( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.13( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 57 pg[9.12( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v55: 291 pgs: 62 unknown, 229 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:47:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct 10 09:47:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:39 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 10 09:47:39 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 10 09:47:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 10 09:47:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 10 09:47:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 10 09:47:39 compute-0 ceph-mon[73551]: 8.10 scrub starts
Oct 10 09:47:39 compute-0 ceph-mon[73551]: 8.10 scrub ok
Oct 10 09:47:39 compute-0 ceph-mon[73551]: 7.1f scrub starts
Oct 10 09:47:39 compute-0 ceph-mon[73551]: 7.1f scrub ok
Oct 10 09:47:39 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:39 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:40 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:40 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:40 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 10 09:47:40 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 10 09:47:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:40 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.795807984 +0000 UTC m=+2.821939459 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 09:47:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.815672675 +0000 UTC m=+2.841804130 container create c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_goldstine, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 10 09:47:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 58 pg[11.0( v 48'48 (0'0,48'48] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58 pruub=15.390714645s) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 48'47 mlcod 48'47 active pruub 182.804336548s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.0( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58 pruub=15.390714645s) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 48'47 mlcod 0'0 unknown pruub 182.804336548s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.5( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.6( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.8( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.7( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.3( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.2( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.4( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1d( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1e( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1f( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.9( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.a( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.b( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.c( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.d( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.e( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.f( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.11( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.10( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.12( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.13( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.14( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-mon[73551]: pgmap v55: 291 pgs: 62 unknown, 229 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:40 compute-0 ceph-mon[73551]: 8.11 scrub starts
Oct 10 09:47:40 compute-0 ceph-mon[73551]: 8.11 scrub ok
Oct 10 09:47:40 compute-0 ceph-mon[73551]: 7.1d scrub starts
Oct 10 09:47:40 compute-0 ceph-mon[73551]: 7.1d scrub ok
Oct 10 09:47:40 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:40 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:40 compute-0 ceph-mon[73551]: osdmap e58: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-0 ceph-mon[73551]: osdmap e59: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.15( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.16( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.18( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.19( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.17( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1a( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1b( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 59 pg[11.1c( v 48'48 lc 0'0 (0'0,48'48] local-lis/les=47/48 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-0 systemd[1]: Started libpod-conmon-c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9.scope.
Oct 10 09:47:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.916445848 +0000 UTC m=+2.942577323 container init c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_goldstine, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.expose-services=, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.930712209 +0000 UTC m=+2.956843664 container start c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_goldstine, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.934514951 +0000 UTC m=+2.960646416 container attach c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_goldstine, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 10 09:47:40 compute-0 compassionate_goldstine[97214]: 0 0
Oct 10 09:47:40 compute-0 systemd[1]: libpod-c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9.scope: Deactivated successfully.
Oct 10 09:47:40 compute-0 conmon[97214]: conmon c3498d6bf459fb582c47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9.scope/container/memory.events
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.940253196 +0000 UTC m=+2.966384701 container died c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_goldstine, io.openshift.expose-services=, com.redhat.component=keepalived-container, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 10 09:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-698d86891aaadaf014bfb64fa89ffb4aa61f16082b4e51061cffcdb1d7a25654-merged.mount: Deactivated successfully.
Oct 10 09:47:40 compute-0 podman[97118]: 2025-10-10 09:47:40.984056881 +0000 UTC m=+3.010188336 container remove c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9 (image=quay.io/ceph/keepalived:2.2.4, name=compassionate_goldstine, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 10 09:47:40 compute-0 systemd[1]: libpod-conmon-c3498d6bf459fb582c4760973a3091ed584336b88eb21c5ca6a7858f29c535c9.scope: Deactivated successfully.
Oct 10 09:47:41 compute-0 systemd[1]: Reloading.
Oct 10 09:47:41 compute-0 systemd-rc-local-generator[97260]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:41 compute-0 systemd-sysv-generator[97264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:41 compute-0 systemd[1]: Reloading.
Oct 10 09:47:41 compute-0 systemd-rc-local-generator[97298]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:41 compute-0 systemd-sysv-generator[97303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:41 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 10 09:47:41 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 10 09:47:41 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.mciijj for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:41 compute-0 sudo[97337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrylmeizymaelcmarbegltohhmbuczvf ; /usr/bin/python3'
Oct 10 09:47:41 compute-0 sudo[97337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:47:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 10 09:47:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 10 09:47:41 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 10 09:47:41 compute-0 ceph-mon[73551]: 8.14 scrub starts
Oct 10 09:47:41 compute-0 ceph-mon[73551]: 8.14 scrub ok
Oct 10 09:47:41 compute-0 ceph-mon[73551]: 7.12 scrub starts
Oct 10 09:47:41 compute-0 ceph-mon[73551]: 7.12 scrub ok
Oct 10 09:47:41 compute-0 ceph-mon[73551]: osdmap e60: 3 total, 3 up, 3 in
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.14( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.17( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.15( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.16( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.13( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.0( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 48'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.b( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.c( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.e( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.8( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.2( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.d( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.a( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.3( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.9( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.6( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.18( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.19( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.10( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1f( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.11( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 60 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [0] r=0 lpr=58 pi=[47,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:41 compute-0 python3[97349]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:47:41 compute-0 podman[97385]: 2025-10-10 09:47:41.892864668 +0000 UTC m=+0.059441910 container create 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, build-date=2023-02-22T09:23:20, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64)
Oct 10 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b973b04c2dbcd4a6d3d729799a6d14b5eb213c1d1829daca91a6167354bdb76d/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:41 compute-0 podman[97398]: 2025-10-10 09:47:41.948651209 +0000 UTC m=+0.054871212 container create fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8 (image=quay.io/ceph/ceph:v19, name=keen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 09:47:41 compute-0 podman[97385]: 2025-10-10 09:47:41.954468867 +0000 UTC m=+0.121046119 container init 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 10 09:47:41 compute-0 podman[97385]: 2025-10-10 09:47:41.870503136 +0000 UTC m=+0.037080388 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 09:47:41 compute-0 podman[97385]: 2025-10-10 09:47:41.96292566 +0000 UTC m=+0.129502892 container start 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4)
Oct 10 09:47:41 compute-0 bash[97385]: 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213
Oct 10 09:47:41 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.mciijj for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: Running on Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 (built for Linux 5.14.0)
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 10 09:47:41 compute-0 systemd[1]: Started libpod-conmon-fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8.scope.
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: Starting VRRP child process, pid=4
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: Startup complete
Oct 10 09:47:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:41 2025: (VI_0) Entering BACKUP STATE (init)
Oct 10 09:47:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:42 2025: VRRP_Script(check_backend) succeeded
Oct 10 09:47:42 compute-0 podman[97398]: 2025-10-10 09:47:41.9226538 +0000 UTC m=+0.028873843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:47:42 compute-0 sudo[97049]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f3140345658dd522ed514c7789def354e9171a72f1005d52fbac270cc7ab0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f3140345658dd522ed514c7789def354e9171a72f1005d52fbac270cc7ab0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:42 compute-0 podman[97398]: 2025-10-10 09:47:42.052504952 +0000 UTC m=+0.158725015 container init fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8 (image=quay.io/ceph/ceph:v19, name=keen_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:47:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:42 compute-0 podman[97398]: 2025-10-10 09:47:42.061261905 +0000 UTC m=+0.167481908 container start fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8 (image=quay.io/ceph/ceph:v19, name=keen_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:47:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:42 compute-0 podman[97398]: 2025-10-10 09:47:42.065122709 +0000 UTC m=+0.171342722 container attach fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8 (image=quay.io/ceph/ceph:v19, name=keen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:47:42 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 9ed56a50-969f-4b67-9531-b2b6d305b577 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct 10 09:47:42 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 9ed56a50-969f-4b67-9531-b2b6d305b577 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 28 seconds
Oct 10 09:47:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 10 09:47:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:42 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 17d79229-7f16-4656-a067-e455d31351db (Updating alertmanager deployment (+1 -> 1))
Oct 10 09:47:42 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Oct 10 09:47:42 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Oct 10 09:47:42 compute-0 sudo[97427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:42 compute-0 sudo[97427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:42 compute-0 sudo[97427]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:42 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:42 compute-0 sudo[97527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:42 compute-0 sudo[97527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:42 compute-0 keen_haslett[97422]: could not fetch user info: no user info saved
Oct 10 09:47:42 compute-0 systemd[1]: libpod-fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8.scope: Deactivated successfully.
Oct 10 09:47:42 compute-0 podman[97398]: 2025-10-10 09:47:42.320368909 +0000 UTC m=+0.426588922 container died fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8 (image=quay.io/ceph/ceph:v19, name=keen_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-18f3140345658dd522ed514c7789def354e9171a72f1005d52fbac270cc7ab0f-merged.mount: Deactivated successfully.
Oct 10 09:47:42 compute-0 podman[97398]: 2025-10-10 09:47:42.366424676 +0000 UTC m=+0.472644679 container remove fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8 (image=quay.io/ceph/ceph:v19, name=keen_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:47:42 compute-0 systemd[1]: libpod-conmon-fcfc97a3aed3fde0dc8f62f7bf2e051dc2c18dcaf672bc6c10dd0272d382eab8.scope: Deactivated successfully.
Oct 10 09:47:42 compute-0 sudo[97337]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:42 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:42 compute-0 sudo[97615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-besedzjousiqovjmgmsipzvyrfdnnycs ; /usr/bin/python3'
Oct 10 09:47:42 compute-0 sudo[97615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:47:42 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 10 09:47:42 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 10 09:47:42 compute-0 python3[97628]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:47:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:42 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:42 compute-0 podman[97657]: 2025-10-10 09:47:42.812469705 +0000 UTC m=+0.056459814 container create 8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3 (image=quay.io/ceph/ceph:v19, name=jovial_mclean, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:47:42 compute-0 systemd[1]: Started libpod-conmon-8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3.scope.
Oct 10 09:47:42 compute-0 podman[97657]: 2025-10-10 09:47:42.780862185 +0000 UTC m=+0.024852354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 10 09:47:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d93ccf75022524b4d582b541ea8b98b13f7cac2abfcd1764544df5518f8842/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d93ccf75022524b4d582b541ea8b98b13f7cac2abfcd1764544df5518f8842/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:42 compute-0 podman[97657]: 2025-10-10 09:47:42.919910153 +0000 UTC m=+0.163900222 container init 8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3 (image=quay.io/ceph/ceph:v19, name=jovial_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:47:42 compute-0 podman[97657]: 2025-10-10 09:47:42.93065292 +0000 UTC m=+0.174642989 container start 8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3 (image=quay.io/ceph/ceph:v19, name=jovial_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:47:42 compute-0 podman[97657]: 2025-10-10 09:47:42.934214795 +0000 UTC m=+0.178204874 container attach 8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3 (image=quay.io/ceph/ceph:v19, name=jovial_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:47:43 compute-0 ceph-mon[73551]: pgmap v58: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:43 compute-0 ceph-mon[73551]: 8.15 scrub starts
Oct 10 09:47:43 compute-0 ceph-mon[73551]: 8.15 scrub ok
Oct 10 09:47:43 compute-0 ceph-mon[73551]: 7.a scrub starts
Oct 10 09:47:43 compute-0 ceph-mon[73551]: 7.a scrub ok
Oct 10 09:47:43 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:43 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 10 09:47:43 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 10 09:47:43 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 23 completed events
Oct 10 09:47:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:43 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-0 jovial_mclean[97672]: {
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "user_id": "openstack",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "display_name": "openstack",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "email": "",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "suspended": 0,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "max_buckets": 1000,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "subusers": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "keys": [
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         {
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:             "user": "openstack",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:             "access_key": "ZCHON2TK9LFT19PR09O0",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:             "secret_key": "BH1Y8gTt5crfKZvtGCrWX2ZxWPQ6pACNwdIHtAnl",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:             "active": true,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:             "create_date": "2025-10-10T09:47:43.120408Z"
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         }
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     ],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "swift_keys": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "caps": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "op_mask": "read, write, delete",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "default_placement": "",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "default_storage_class": "",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "placement_tags": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "bucket_quota": {
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "enabled": false,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "check_on_raw": false,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "max_size": -1,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "max_size_kb": 0,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "max_objects": -1
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     },
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "user_quota": {
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "enabled": false,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "check_on_raw": false,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "max_size": -1,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "max_size_kb": 0,
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:         "max_objects": -1
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     },
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "temp_url_keys": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "type": "rgw",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "mfa_ids": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "account_id": "",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "path": "/",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "create_date": "2025-10-10T09:47:43.119661Z",
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "tags": [],
Oct 10 09:47:43 compute-0 jovial_mclean[97672]:     "group_ids": []
Oct 10 09:47:43 compute-0 jovial_mclean[97672]: }
Oct 10 09:47:43 compute-0 jovial_mclean[97672]: 
Oct 10 09:47:43 compute-0 systemd[1]: libpod-8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3.scope: Deactivated successfully.
Oct 10 09:47:43 compute-0 podman[97657]: 2025-10-10 09:47:43.769423767 +0000 UTC m=+1.013413846 container died 8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3 (image=quay.io/ceph/ceph:v19, name=jovial_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:47:44 compute-0 ceph-mon[73551]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 10 09:47:44 compute-0 ceph-mon[73551]: 8.3 scrub starts
Oct 10 09:47:44 compute-0 ceph-mon[73551]: 8.3 scrub ok
Oct 10 09:47:44 compute-0 ceph-mon[73551]: 7.13 scrub starts
Oct 10 09:47:44 compute-0 ceph-mon[73551]: 7.13 scrub ok
Oct 10 09:47:44 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d93ccf75022524b4d582b541ea8b98b13f7cac2abfcd1764544df5518f8842-merged.mount: Deactivated successfully.
Oct 10 09:47:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:44 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:44 compute-0 podman[97657]: 2025-10-10 09:47:44.195847582 +0000 UTC m=+1.439837651 container remove 8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3 (image=quay.io/ceph/ceph:v19, name=jovial_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:47:44 compute-0 systemd[1]: libpod-conmon-8df49a63cfeeb4298fbc1ef5f3c9ca5bc7c83bc51a920b9520943c53ccd005a3.scope: Deactivated successfully.
Oct 10 09:47:44 compute-0 sudo[97615]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.245802185 +0000 UTC m=+1.612049701 volume create 4c40a7ea51dc089410c0425033d9dbfd5d600e422d1d7ba603db35a679dbe816
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.25400644 +0000 UTC m=+1.620253956 container create da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_visvesvaraya, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.230206231 +0000 UTC m=+1.596453787 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 10 09:47:44 compute-0 systemd[1]: Started libpod-conmon-da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55.scope.
Oct 10 09:47:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284d5421f59dcd882b9dc35ed149f3b534663114c02b3f3a2694c9fdadd3608b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.337832036 +0000 UTC m=+1.704079582 container init da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_visvesvaraya, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.34476963 +0000 UTC m=+1.711017136 container start da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_visvesvaraya, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 vibrant_visvesvaraya[97894]: 65534 65534
Oct 10 09:47:44 compute-0 systemd[1]: libpod-da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55.scope: Deactivated successfully.
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.348228042 +0000 UTC m=+1.714475568 container attach da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_visvesvaraya, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.349032998 +0000 UTC m=+1.715280514 container died da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_visvesvaraya, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-284d5421f59dcd882b9dc35ed149f3b534663114c02b3f3a2694c9fdadd3608b-merged.mount: Deactivated successfully.
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.391247891 +0000 UTC m=+1.757495397 container remove da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_visvesvaraya, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97643]: 2025-10-10 09:47:44.394781084 +0000 UTC m=+1.761028590 volume remove 4c40a7ea51dc089410c0425033d9dbfd5d600e422d1d7ba603db35a679dbe816
Oct 10 09:47:44 compute-0 systemd[1]: libpod-conmon-da6382e818a0e051cb263e98218b2c91c99120e8f7e7818e5e2f249ce6818d55.scope: Deactivated successfully.
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.474223849 +0000 UTC m=+0.052054271 volume create 0ee4b454a69b5044b2acc221ac8c45bea7adb7a92223944d4ef8faf4a6558298
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.486572188 +0000 UTC m=+0.064402570 container create 374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_lederberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 10 09:47:44 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 10 09:47:44 compute-0 systemd[1]: Started libpod-conmon-374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36.scope.
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.450115071 +0000 UTC m=+0.027945543 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 10 09:47:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:44 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848461a8ee34a1cc95073eb1101fd8333114f04efe5bdbfb3b9d1723f7a0d853/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.590048208 +0000 UTC m=+0.167878690 container init 374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_lederberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.600362031 +0000 UTC m=+0.178192453 container start 374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_lederberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 boring_lederberg[97930]: 65534 65534
Oct 10 09:47:44 compute-0 systemd[1]: libpod-374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36.scope: Deactivated successfully.
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.604591127 +0000 UTC m=+0.182421589 container attach 374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_lederberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.60498195 +0000 UTC m=+0.182812382 container died 374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_lederberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-848461a8ee34a1cc95073eb1101fd8333114f04efe5bdbfb3b9d1723f7a0d853-merged.mount: Deactivated successfully.
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.659075566 +0000 UTC m=+0.236905978 container remove 374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36 (image=quay.io/prometheus/alertmanager:v0.25.0, name=boring_lederberg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:44 compute-0 podman[97913]: 2025-10-10 09:47:44.66258143 +0000 UTC m=+0.240411822 volume remove 0ee4b454a69b5044b2acc221ac8c45bea7adb7a92223944d4ef8faf4a6558298
Oct 10 09:47:44 compute-0 systemd[1]: libpod-conmon-374f22c825e284d8a1d9c6e5f71a08863a6e54ae23175fe767e79e3886dceb36.scope: Deactivated successfully.
Oct 10 09:47:44 compute-0 systemd[1]: Reloading.
Oct 10 09:47:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:44 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:44 compute-0 systemd-rc-local-generator[97997]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:44 compute-0 systemd-sysv-generator[98002]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:44 compute-0 python3[97970]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: [dashboard INFO request] [192.168.122.100:45370] [GET] [200] [0.118s] [6.3K] [4b877538-dcc9-456c-823c-b68c219430b6] /
Oct 10 09:47:45 compute-0 ceph-mon[73551]: pgmap v60: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:45 compute-0 ceph-mon[73551]: 8.17 scrub starts
Oct 10 09:47:45 compute-0 ceph-mon[73551]: 8.17 scrub ok
Oct 10 09:47:45 compute-0 ceph-mon[73551]: 7.11 scrub starts
Oct 10 09:47:45 compute-0 ceph-mon[73551]: 7.11 scrub ok
Oct 10 09:47:45 compute-0 systemd[1]: Reloading.
Oct 10 09:47:45 compute-0 systemd-rc-local-generator[98042]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:45 compute-0 systemd-sysv-generator[98047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:45 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:45 compute-0 python3[98068]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: [dashboard INFO request] [192.168.122.100:45374] [GET] [200] [0.003s] [6.3K] [61ed4e64-caf2-4ec6-9ead-5ceb711c1db8] /
Oct 10 09:47:45 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:45 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:47:45 2025: (VI_0) Entering MASTER STATE
Oct 10 09:47:45 compute-0 podman[98119]: 2025-10-10 09:47:45.73289109 +0000 UTC m=+0.042621666 volume create d3d11b6e2c04c16f3fdfb68b83b7136cad02b37c9703134f388fc9cbf8c1997d
Oct 10 09:47:45 compute-0 podman[98119]: 2025-10-10 09:47:45.745274881 +0000 UTC m=+0.055005457 container create a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:45 compute-0 podman[98119]: 2025-10-10 09:47:45.712577625 +0000 UTC m=+0.022308221 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 10 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4410f3bb456ba7f0088fed6a62919964611c1ad0c46f266c88475e476aafc196/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4410f3bb456ba7f0088fed6a62919964611c1ad0c46f266c88475e476aafc196/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:45 compute-0 podman[98119]: 2025-10-10 09:47:45.83041934 +0000 UTC m=+0.140149936 container init a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:45 compute-0 podman[98119]: 2025-10-10 09:47:45.840304708 +0000 UTC m=+0.150035324 container start a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:47:45 compute-0 bash[98119]: a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6
Oct 10 09:47:45 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.872Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.872Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.883Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.886Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct 10 09:47:45 compute-0 sudo[97527]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.937Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.938Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 17d79229-7f16-4656-a067-e455d31351db (Updating alertmanager deployment (+1 -> 1))
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 17d79229-7f16-4656-a067-e455d31351db (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Oct 10 09:47:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.948Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct 10 09:47:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:45.948Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct 10 09:47:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev e4d04d97-8a2c-45ca-ab40-f624398b9e75 (Updating grafana deployment (+1 -> 1))
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Oct 10 09:47:45 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Oct 10 09:47:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.14( v 60'57 (0'0,60'57] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743201256s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 184.432464600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.16( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.690731049s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380035400s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.746603966s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.435928345s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.16( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.690690994s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380035400s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.14( v 60'57 (0'0,60'57] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743113518s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 184.432464600s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.746558189s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.435928345s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.7( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687851906s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.377502441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.7( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687830925s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.377502441s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.6( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.673133850s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.362899780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.17( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.746088982s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.435897827s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.6( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.673093796s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.362899780s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.17( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.746068954s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.435897827s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.15( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687622070s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.377487183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.15( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687602043s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.377487183s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.679567337s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369628906s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.679543495s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369628906s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.16( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745933533s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436126709s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.15( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.679720879s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369934082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.15( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.679697990s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369934082s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.16( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745897293s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436126709s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.16( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.679498672s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369903564s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.17( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687140465s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.377563477s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.16( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.679479599s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369903564s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.17( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687115669s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.377563477s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.13( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745646477s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436157227s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.13( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745616913s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436157227s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.10( v 57'47 (0'0,57'47] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678233147s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=55'45 lcod 55'46 mlcod 55'46 active pruub 187.368881226s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.11( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687014580s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.377563477s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.11( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686864853s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.377563477s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.10( v 57'47 (0'0,57'47] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678192139s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=55'45 lcod 55'46 mlcod 0'0 unknown NOTIFY pruub 187.368881226s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.10( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688853264s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.379806519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745232582s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436218262s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745210648s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436218262s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.11( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678569794s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369598389s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.10( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688818932s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.379806519s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.11( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678547859s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369598389s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745159149s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436355591s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.2( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678420067s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369659424s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745141983s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436355591s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.3( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688664436s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.379913330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.2( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678396225s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369659424s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.3( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688632965s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.379913330s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.3( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678475380s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369934082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.3( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678437233s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369934082s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.e( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688470840s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.379989624s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.e( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688450813s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.379989624s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.9( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688394547s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380020142s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.f( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678503036s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370101929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.9( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.688381195s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380020142s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.f( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.678442955s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370101929s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677595139s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369918823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677085876s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.369491577s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677540779s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369918823s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677059174s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.369491577s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.9( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677455902s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370025635s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.9( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677429199s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370025635s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.b( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687583923s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380203247s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.b( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687562943s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380203247s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.8( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687643051s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380310059s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.a( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677399635s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370101929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.8( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687621117s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380310059s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.a( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677386284s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370101929s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.f( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687467575s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380310059s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.f( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687436104s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380310059s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.e( v 60'57 (0'0,60'57] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743637085s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 184.436553955s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.e( v 60'57 (0'0,60'57] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743602753s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 184.436553955s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.d( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677633286s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370605469s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.d( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677610397s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370605469s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743492126s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436523438s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.d( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687335014s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380386353s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743466377s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436523438s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.c( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677248001s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370376587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.a( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743424416s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436553955s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.a( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743412971s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436553955s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.c( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677228928s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370376587s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.8( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743344307s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436584473s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.8( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743330956s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436584473s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.b( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676832199s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370132446s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.a( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687139511s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380447388s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.b( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676814079s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370132446s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.a( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687099457s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380447388s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.3( v 60'57 (0'0,60'57] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743248940s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 184.436706543s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.3( v 60'57 (0'0,60'57] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743214607s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 184.436706543s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743206024s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436767578s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.6( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686851501s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380432129s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743185043s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436767578s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.6( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686829567s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380432129s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.d( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687323570s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380386353s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.5( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676479340s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370254517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743023872s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436813354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.5( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686707497s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380508423s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.5( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676459312s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370254517s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.5( v 44'6 (0'0,44'6] local-lis/les=56/57 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686694145s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380508423s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=58/60 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.743004799s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436813354s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676440239s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370346069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676428795s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370346069s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.1b( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676289558s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370452881s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.19( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.742784500s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.436981201s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.1b( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676258087s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370452881s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.18( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686301231s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.380569458s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.18( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.686289787s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.380569458s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.742715836s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.437042236s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676246643s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370590210s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.742695808s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.437042236s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676226616s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370590210s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745224953s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.439636230s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745211601s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.439636230s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.18( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676065445s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370574951s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.18( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.676044464s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370574951s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.1f( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677892685s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.372436523s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745662689s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.440231323s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.1f( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677875519s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.372436523s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745642662s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.440231323s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745023727s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.439682007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745010376s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.439682007s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.19( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.742769241s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.436981201s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745060921s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active pruub 184.439849854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=58/60 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61 pruub=11.745038986s) [1] r=-1 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.439849854s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.1d( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.691508293s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.386383057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.1c( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.675749779s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.370635986s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.1d( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.691489220s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.386383057s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.1c( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.675728798s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.370635986s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.12( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.691952705s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.386947632s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.12( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.691931725s) [1] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.386947632s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.13( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.691773415s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active pruub 181.386856079s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[9.13( v 44'6 (0'0,44'6] local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.691757202s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.386856079s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677320480s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active pruub 187.372436523s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.677300453s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.372436523s@ mbc={}] state<Start>: transitioning to Stray
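
The burst of PeeringState::start_peering_interval / "transitioning to Stray" entries above records osd.0 handing the acting-primary role for PGs in pools 8, 9 and 11 over to osd.1 and osd.2 at epoch 61. A minimal sketch, assuming the journal text is saved to a file (the path ceph-osd.log is hypothetical), that tallies those acting-set changes per PG:

    import re
    from collections import Counter

    # Matches the pg id and the acting-set change in lines such as:
    #   pg[11.3( ... )] ... start_peering_interval up [0] -> [2], acting [0] -> [2], ...
    PEERING = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*?"
        r"start_peering_interval up \[(?P<up_from>[\d,]*)\] -> \[(?P<up_to>[\d,]*)\], "
        r"acting \[(?P<act_from>[\d,]*)\] -> \[(?P<act_to>[\d,]*)\]"
    )

    moves = Counter()
    with open("ceph-osd.log") as f:  # hypothetical path to the saved journal
        for line in f:
            m = PEERING.search(line)
            if m:
                moves[(m["act_from"], m["act_to"])] += 1
                print(f"pg {m['pgid']}: acting [{m['act_from']}] -> [{m['act_to']}]")

    for (src, dst), n in moves.items():
        print(f"{n} PGs moved acting set [{src}] -> [{dst}]")
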
Oct 10 09:47:46 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Oct 10 09:47:46 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Oct 10 09:47:46 compute-0 ceph-mon[73551]: 8.8 scrub starts
Oct 10 09:47:46 compute-0 ceph-mon[73551]: 8.8 scrub ok
Oct 10 09:47:46 compute-0 ceph-mon[73551]: 7.16 deep-scrub starts
Oct 10 09:47:46 compute-0 ceph-mon[73551]: 7.16 deep-scrub ok
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
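
The audit dispatches above show the mgr stepping pgp_num_actual for several pools, which controls how many placement "slots" a pool's PGs are actually distributed across while pg_num changes take effect. Assuming a reachable cluster with the ceph CLI on PATH, the resulting per-pool values can be read back; a sketch:

    import json
    import subprocess

    def pool_pg_info(pool: str) -> dict:
        # 'ceph osd pool get <pool> all -f json' returns pg_num, pgp_num, etc.
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", pool, "all", "-f", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # Pool names taken from the dispatch lines above.
    for pool in (".nfs", ".rgw.root", "default.rgw.log"):
        info = pool_pg_info(pool)
        print(pool, "pg_num:", info.get("pg_num"), "pgp_num:", info.get("pgp_num"))
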
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.10( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.1b( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.18( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.12( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.1e( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.f( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.6( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.2( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.3( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.c( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.6( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.e( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.9( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.8( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.b( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.4( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.1c( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.10( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[7.13( empty local-lis/les=0/0 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 61 pg[12.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-0 sudo[98158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:46 compute-0 sudo[98158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:46 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:46 compute-0 sudo[98158]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:46 compute-0 sudo[98183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:46 compute-0 sudo[98183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
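
The sudo entries above are cephadm (driven by the mgr) invoking its copied-over binary to deploy the grafana daemon with the quay.io/ceph/grafana:10.4.0 image. Assuming a working cluster and the ceph CLI available, the deployed daemon should afterwards be visible to the orchestrator; a sketch:

    import subprocess

    # 'ceph orch ps' lists cephadm-managed daemons; filtering by daemon type
    # narrows the output to the grafana instance being deployed above.
    out = subprocess.run(
        ["ceph", "orch", "ps", "--daemon-type", "grafana"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)
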
Oct 10 09:47:46 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 10 09:47:46 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 10 09:47:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:46 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:46 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 10 09:47:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.b( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-mon[73551]: 8.f deep-scrub starts
Oct 10 09:47:47 compute-0 ceph-mon[73551]: pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Oct 10 09:47:47 compute-0 ceph-mon[73551]: 8.f deep-scrub ok
Oct 10 09:47:47 compute-0 ceph-mon[73551]: 7.15 scrub starts
Oct 10 09:47:47 compute-0 ceph-mon[73551]: 7.15 scrub ok
Oct 10 09:47:47 compute-0 ceph-mon[73551]: Regenerating cephadm self-signed grafana TLS certificates
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 10 09:47:47 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.1c( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-mon[73551]: osdmap e61: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.8( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-mon[73551]: Deploying daemon grafana.compute-0 on compute-0
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.f( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-mon[73551]: 7.1a scrub starts
Oct 10 09:47:47 compute-0 ceph-mon[73551]: 7.1a scrub ok
Oct 10 09:47:47 compute-0 ceph-mon[73551]: osdmap e62: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.e( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.4( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.3( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.2( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.8( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.a( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.6( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.e( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.b( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.c( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.6( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.12( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.1e( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.9( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.10( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.1b( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.18( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.13( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[7.10( empty local-lis/les=61/62 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 62 pg[12.19( empty local-lis/les=61/62 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
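
The AllReplicasActivated reactions above complete peering at epoch 62 for PGs in pools 7 and 12 with osd.0 as primary. For reference when reading these lines, a pg identifier such as 12.1c decomposes as "<pool-id>.<pg-seed>" with the seed in hex; a trivial helper, offered only as a sketch:

    # A PG id such as '12.1c' is '<pool-id>.<pg-seed>', with the seed in hex.
    def split_pgid(pgid: str) -> tuple[int, int]:
        pool, seed = pgid.split(".")
        return int(pool), int(seed, 16)

    assert split_pgid("12.1c") == (12, 28)
    assert split_pgid("7.b") == (7, 11)
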
Oct 10 09:47:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Oct 10 09:47:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Oct 10 09:47:47 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 10 09:47:47 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 10 09:47:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:47.887Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000303974s
Oct 10 09:47:47 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 10 09:47:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 10 09:47:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 10 09:47:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-0 ceph-mon[73551]: 9.14 scrub starts
Oct 10 09:47:48 compute-0 ceph-mon[73551]: 9.14 scrub ok
Oct 10 09:47:48 compute-0 ceph-mon[73551]: pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Oct 10 09:47:48 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 10 09:47:48 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 10 09:47:48 compute-0 ceph-mon[73551]: osdmap e63: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:48 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 10 09:47:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 10 09:47:48 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 10 09:47:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:48 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:48 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 24 completed events
Oct 10 09:47:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:48 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:48 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event a797963c-fe68-46fa-86d7-423c2da9f6b3 (Global Recovery Event) in 10 seconds
Oct 10 09:47:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 10 09:47:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 4 objects/s recovering
Oct 10 09:47:49 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct 10 09:47:49 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct 10 09:47:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 10 09:47:49 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 10 09:47:49 compute-0 ceph-mon[73551]: 11.15 scrub starts
Oct 10 09:47:49 compute-0 ceph-mon[73551]: 12.13 deep-scrub starts
Oct 10 09:47:49 compute-0 ceph-mon[73551]: 11.15 scrub ok
Oct 10 09:47:49 compute-0 ceph-mon[73551]: 12.13 deep-scrub ok
Oct 10 09:47:49 compute-0 ceph-mon[73551]: osdmap e64: 3 total, 3 up, 3 in
Oct 10 09:47:49 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:50 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:50 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:50 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 10 09:47:50 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 10 09:47:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 10 09:47:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 10 09:47:50 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-mon[73551]: 9.2 scrub starts
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-mon[73551]: 9.2 scrub ok
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-mon[73551]: pgmap v67: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 4 objects/s recovering
Oct 10 09:47:50 compute-0 ceph-mon[73551]: 11.0 scrub starts
Oct 10 09:47:50 compute-0 ceph-mon[73551]: 11.0 scrub ok
Oct 10 09:47:50 compute-0 ceph-mon[73551]: osdmap e65: 3 total, 3 up, 3 in
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 66 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:50 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 8 remapped+peering, 14 active+remapped, 2 peering, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 806 B/s, 25 objects/s recovering
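
The pgmap entries (v67 and v70 above) track the cluster settling back toward active+clean after the pgp_num bumps remapped a handful of PGs. A minimal sketch that extracts the per-state PG counts from such lines, assuming the journal is saved to a hypothetical pgmap.log:

    import re

    # e.g. "pgmap v70: 353 pgs: 8 remapped+peering, 14 active+remapped, ...; 456 KiB data, ..."
    PGMAP = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")

    with open("pgmap.log") as f:  # hypothetical path to the saved journal
        for line in f:
            m = PGMAP.search(line)
            if not m:
                continue
            version, total, states = m.groups()
            counts = dict(
                (state, int(n))
                for n, state in (chunk.split() for chunk in states.split(", "))
            )
            print(f"pgmap v{version}: total={total} {counts}")
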
Oct 10 09:47:51 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Oct 10 09:47:51 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Oct 10 09:47:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 10 09:47:51 compute-0 ceph-mon[73551]: 10.17 scrub starts
Oct 10 09:47:51 compute-0 ceph-mon[73551]: 10.17 scrub ok
Oct 10 09:47:51 compute-0 ceph-mon[73551]: 11.c scrub starts
Oct 10 09:47:51 compute-0 ceph-mon[73551]: 11.c scrub ok
Oct 10 09:47:51 compute-0 ceph-mon[73551]: osdmap e66: 3 total, 3 up, 3 in
Oct 10 09:47:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 10 09:47:51 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 67 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:52 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:52 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Oct 10 09:47:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:52 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:52 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.644073166 +0000 UTC m=+5.822543744 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.663552875 +0000 UTC m=+5.842023433 container create dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818 (image=quay.io/ceph/grafana:10.4.0, name=magical_brattain, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 systemd[1]: Started libpod-conmon-dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818.scope.
Oct 10 09:47:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.757340562 +0000 UTC m=+5.935811130 container init dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818 (image=quay.io/ceph/grafana:10.4.0, name=magical_brattain, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 ceph-mon[73551]: 10.1 deep-scrub starts
Oct 10 09:47:52 compute-0 ceph-mon[73551]: 10.1 deep-scrub ok
Oct 10 09:47:52 compute-0 ceph-mon[73551]: pgmap v70: 353 pgs: 8 remapped+peering, 14 active+remapped, 2 peering, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 806 B/s, 25 objects/s recovering
Oct 10 09:47:52 compute-0 ceph-mon[73551]: 11.b deep-scrub starts
Oct 10 09:47:52 compute-0 ceph-mon[73551]: 11.b deep-scrub ok
Oct 10 09:47:52 compute-0 ceph-mon[73551]: osdmap e67: 3 total, 3 up, 3 in
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.766078814 +0000 UTC m=+5.944549372 container start dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818 (image=quay.io/ceph/grafana:10.4.0, name=magical_brattain, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.769724882 +0000 UTC m=+5.948195440 container attach dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818 (image=quay.io/ceph/grafana:10.4.0, name=magical_brattain, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 magical_brattain[98467]: 472 0
Oct 10 09:47:52 compute-0 systemd[1]: libpod-dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818.scope: Deactivated successfully.
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.771003953 +0000 UTC m=+5.949474511 container died dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818 (image=quay.io/ceph/grafana:10.4.0, name=magical_brattain, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:52 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b4b77acbff6a1165707069c7c501060cb6fc5a6fef6ab09eaa325d6a4f32447-merged.mount: Deactivated successfully.
Oct 10 09:47:52 compute-0 podman[98249]: 2025-10-10 09:47:52.819955103 +0000 UTC m=+5.998425661 container remove dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818 (image=quay.io/ceph/grafana:10.4.0, name=magical_brattain, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 systemd[1]: libpod-conmon-dee2227123dd6747b16a43a63c1b8097e80a65cbfd0fdda3598f9cf088f27818.scope: Deactivated successfully.
Oct 10 09:47:52 compute-0 podman[98484]: 2025-10-10 09:47:52.903349105 +0000 UTC m=+0.056351110 container create 5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde (image=quay.io/ceph/grafana:10.4.0, name=reverent_moser, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:52 compute-0 systemd[1]: Started libpod-conmon-5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde.scope.
Oct 10 09:47:52 compute-0 podman[98484]: 2025-10-10 09:47:52.873345317 +0000 UTC m=+0.026347352 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 10 09:47:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:52 compute-0 podman[98484]: 2025-10-10 09:47:52.995869274 +0000 UTC m=+0.148871299 container init 5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde (image=quay.io/ceph/grafana:10.4.0, name=reverent_moser, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:53 compute-0 podman[98484]: 2025-10-10 09:47:53.001400155 +0000 UTC m=+0.154402160 container start 5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde (image=quay.io/ceph/grafana:10.4.0, name=reverent_moser, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:53 compute-0 reverent_moser[98500]: 472 0
Oct 10 09:47:53 compute-0 systemd[1]: libpod-5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde.scope: Deactivated successfully.
Oct 10 09:47:53 compute-0 podman[98484]: 2025-10-10 09:47:53.005440149 +0000 UTC m=+0.158442154 container attach 5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde (image=quay.io/ceph/grafana:10.4.0, name=reverent_moser, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:53 compute-0 podman[98484]: 2025-10-10 09:47:53.005682447 +0000 UTC m=+0.158684442 container died 5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde (image=quay.io/ceph/grafana:10.4.0, name=reverent_moser, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f93a1e10dddb95aff2658ba637a1329afe9a347f452b96caaec981d11b2c86db-merged.mount: Deactivated successfully.
Oct 10 09:47:53 compute-0 podman[98484]: 2025-10-10 09:47:53.043100656 +0000 UTC m=+0.196102661 container remove 5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde (image=quay.io/ceph/grafana:10.4.0, name=reverent_moser, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:53 compute-0 systemd[1]: libpod-conmon-5a6c8c2530023d352ce732775a8b5b54105002185e4871f194292914f16d0bde.scope: Deactivated successfully.
Oct 10 09:47:53 compute-0 systemd[1]: Reloading.
Oct 10 09:47:53 compute-0 systemd-rc-local-generator[98544]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:53 compute-0 systemd-sysv-generator[98547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:53 compute-0 systemd[1]: Reloading.
Oct 10 09:47:53 compute-0 systemd-sysv-generator[98589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:53 compute-0 systemd-rc-local-generator[98585]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 8 remapped+peering, 14 active+remapped, 2 peering, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 632 B/s, 20 objects/s recovering
Oct 10 09:47:53 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Oct 10 09:47:53 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Oct 10 09:47:53 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:53 compute-0 ceph-mon[73551]: 10.1f scrub starts
Oct 10 09:47:53 compute-0 ceph-mon[73551]: 10.1f scrub ok
Oct 10 09:47:53 compute-0 ceph-mon[73551]: 11.9 deep-scrub starts
Oct 10 09:47:53 compute-0 ceph-mon[73551]: 11.9 deep-scrub ok
Oct 10 09:47:53 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 25 completed events
Oct 10 09:47:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:53 compute-0 ceph-mgr[73845]: [progress WARNING root] Starting Global Recovery Event,24 pgs not in active + clean state
Oct 10 09:47:54 compute-0 podman[98646]: 2025-10-10 09:47:54.064471051 +0000 UTC m=+0.068108110 container create 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6702f60835d419b95dde36cdb8baaa1f6a7b3f4824477c8ece6343448cba2d3/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6702f60835d419b95dde36cdb8baaa1f6a7b3f4824477c8ece6343448cba2d3/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6702f60835d419b95dde36cdb8baaa1f6a7b3f4824477c8ece6343448cba2d3/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6702f60835d419b95dde36cdb8baaa1f6a7b3f4824477c8ece6343448cba2d3/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6702f60835d419b95dde36cdb8baaa1f6a7b3f4824477c8ece6343448cba2d3/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:54 compute-0 podman[98646]: 2025-10-10 09:47:54.036819762 +0000 UTC m=+0.040456841 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 10 09:47:54 compute-0 podman[98646]: 2025-10-10 09:47:54.13620535 +0000 UTC m=+0.139842459 container init 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:54 compute-0 podman[98646]: 2025-10-10 09:47:54.145763634 +0000 UTC m=+0.149400683 container start 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:47:54 compute-0 bash[98646]: 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d
Oct 10 09:47:54 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:54 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:54 compute-0 sudo[98183]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 10 09:47:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev e4d04d97-8a2c-45ca-ab40-f624398b9e75 (Updating grafana deployment (+1 -> 1))
Oct 10 09:47:54 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event e4d04d97-8a2c-45ca-ab40-f624398b9e75 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Oct 10 09:47:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 10 09:47:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev 57b94cb1-bb87-4a01-89e1-75a3fc43a869 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 10 09:47:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Oct 10 09:47:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.ofnenu on compute-0
Oct 10 09:47:54 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.ofnenu on compute-0
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.364029211Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-10T09:47:54Z
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366401379Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.36642219Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.36643217Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.36643757Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.36644234Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.36644766Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366452381Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366457531Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366466121Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366470751Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366475781Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366480602Z level=info msg=Target target=[all]
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366492602Z level=info msg="Path Home" path=/usr/share/grafana
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366497702Z level=info msg="Path Data" path=/var/lib/grafana
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366502272Z level=info msg="Path Logs" path=/var/log/grafana
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366510143Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366514843Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=settings t=2025-10-10T09:47:54.366519283Z level=info msg="App mode production"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=sqlstore t=2025-10-10T09:47:54.366978558Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=sqlstore t=2025-10-10T09:47:54.367012249Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.368096805Z level=info msg="Starting DB migrations"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.369583234Z level=info msg="Executing migration" id="create migration_log table"
Oct 10 09:47:54 compute-0 PackageKit[31017]: daemon quit
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.377477904Z level=info msg="Migration successfully executed" id="create migration_log table" duration=7.88244ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.381357171Z level=info msg="Executing migration" id="create user table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.382447147Z level=info msg="Migration successfully executed" id="create user table" duration=1.133187ms
Oct 10 09:47:54 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.384613678Z level=info msg="Executing migration" id="add unique index user.login"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.385401124Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=752.335µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.388520797Z level=info msg="Executing migration" id="add unique index user.email"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.389152097Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=628.39µs
Oct 10 09:47:54 compute-0 sudo[98682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.390901405Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.391510256Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=608.33µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.393282183Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.393929115Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=646.612µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.395378532Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Oct 10 09:47:54 compute-0 sudo[98682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.397617326Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.237423ms
Oct 10 09:47:54 compute-0 sudo[98682]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.39956606Z level=info msg="Executing migration" id="create user table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.400647905Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.082945ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.402649742Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.403290372Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=640.13µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.404955617Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.405616638Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=661.171µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.407555803Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.407925925Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=369.842µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.409553538Z level=info msg="Executing migration" id="Drop old table user_v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.410080806Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=528.398µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.411525363Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.412502745Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=982.412µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.414200311Z level=info msg="Executing migration" id="Update user table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.414230652Z level=info msg="Migration successfully executed" id="Update user table charset" duration=30.511µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.415967929Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.417142558Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.174288ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.418775432Z level=info msg="Executing migration" id="Add missing user data"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.418992629Z level=info msg="Migration successfully executed" id="Add missing user data" duration=217.507µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.4205695Z level=info msg="Executing migration" id="Add is_disabled column to user"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.421672557Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.102647ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.424682966Z level=info msg="Executing migration" id="Add index user.login/user.email"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.42572708Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.041314ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.427820919Z level=info msg="Executing migration" id="Add is_service_account column to user"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.429418692Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.597183ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.431764969Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.441965964Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.200455ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.444129386Z level=info msg="Executing migration" id="Add uid column to user"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.44550057Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.371625ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.447348462Z level=info msg="Executing migration" id="Update uid column values for users"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.447580548Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=232.557µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.449396779Z level=info msg="Executing migration" id="Add unique index user_uid"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.450183165Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=786.266µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.452420318Z level=info msg="Executing migration" id="create temp user table v1-7"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.45340609Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=983.502µs
Oct 10 09:47:54 compute-0 sudo[98707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.455877202Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.456700938Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=822.796µs
Oct 10 09:47:54 compute-0 sudo[98707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.458674754Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.45948515Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=809.906µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.461394823Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.462162978Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=767.735µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.464097382Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.464913828Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=817.826µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.466728629Z level=info msg="Executing migration" id="Update temp_user table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.466751179Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=23.07µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.468575139Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.469369156Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=794.066µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.47105063Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.471848417Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=797.387µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.473457269Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.474243066Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=785.566µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.475946341Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.476794389Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=847.628µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.478487745Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.482480156Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.992172ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.484250704Z level=info msg="Executing migration" id="create temp_user v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.485416233Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.164879ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.487072017Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.488044009Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=971.632µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.48959531Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.49050605Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=910.17µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.492186186Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.493092035Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=905.529µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.494791912Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.495712101Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=919.589µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.498012787Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.498545034Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=533.148µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.500068215Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.500828179Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=758.064µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.502532236Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.503001681Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=470.315µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.504669446Z level=info msg="Executing migration" id="create star table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.505455152Z level=info msg="Migration successfully executed" id="create star table" duration=785.236µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.506984902Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.507878612Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=892.969µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.509882747Z level=info msg="Executing migration" id="create org table v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.510776097Z level=info msg="Migration successfully executed" id="create org table v1" duration=891.53µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.512460632Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.513290509Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=827.717µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.515100649Z level=info msg="Executing migration" id="create org_user table v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.515863394Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=762.185µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.518996647Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.519852936Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=856.099µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.521910083Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.522901146Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=991.873µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.525006805Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.525932825Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=928.36µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.527855309Z level=info msg="Executing migration" id="Update org table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.52788711Z level=info msg="Migration successfully executed" id="Update org table charset" duration=29.411µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.529524053Z level=info msg="Executing migration" id="Update org_user table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.529554284Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=30.491µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.531128356Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.531359144Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=230.048µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.533646429Z level=info msg="Executing migration" id="create dashboard table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.534790986Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.132387ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.536965378Z level=info msg="Executing migration" id="add index dashboard.account_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.537923449Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=957.911µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.539848223Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.540836515Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=987.082µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.542906393Z level=info msg="Executing migration" id="create dashboard_tag table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.543699079Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=792.196µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.545730806Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:54 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.546697558Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=965.052µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.548628311Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.549572112Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=943.841µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.551128363Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.557500113Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.37121ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.559149598Z level=info msg="Executing migration" id="create dashboard v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.560104158Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=934.25µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.561929729Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.562812218Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=882.598µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.564973719Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.565898239Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=924.7µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.56806186Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.568562447Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=500.287µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.570219722Z level=info msg="Executing migration" id="drop table dashboard_v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.57169873Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.479078ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.573266652Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.573368455Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=102.373µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.57502677Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.577055736Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.028537ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.578857086Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.5808124Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.957554ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.582599568Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.584626436Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.990906ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.586435455Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.587382066Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=945.371µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.58961704Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.591646926Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.035307ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.593141915Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.593849708Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=708.193µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.595248015Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.595871975Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=623.9µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.597821289Z level=info msg="Executing migration" id="Update dashboard table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.59784595Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=24.521µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.599302517Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.599352749Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=47.242µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.600780337Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.602463021Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.682294ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.603901129Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.605483961Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.582312ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.606783274Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.60820238Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.418676ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.609555725Z level=info msg="Executing migration" id="Add column uid in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.610996262Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.440637ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.612366258Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.612576785Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=211.067µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.614146356Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.614797457Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=650.431µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.616602476Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.617261788Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=659.562µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.618563771Z level=info msg="Executing migration" id="Update dashboard title length"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.618586951Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=22.67µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.620001618Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.62064831Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=618.251µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.622065076Z level=info msg="Executing migration" id="create dashboard_provisioning"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.62280161Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=736.124µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.624541017Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.628739486Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.163958ms
Oct 10 09:47:54 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.630719971Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.631486046Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=767.105µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.634718543Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.635723845Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.004382ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.63800688Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.638945331Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=937.881µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.640984888Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Oct 10 09:47:54 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.64136352Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=378.952µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.642944383Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.643562513Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=618.3µs
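The migrator entries above walk through Grafana's standard table-rebuild pattern: rename dashboard_provisioning to a throwaway _tmp_qwerty table, create the v2 table, recreate its indexes, copy the rows across, and drop the temporary table. A minimal sketch of that same sequence against SQLite, with a hypothetical column set standing in for Grafana's actual DDL (which lives in its Go migrator, not in this log):

    import sqlite3

    # Hypothetical v1 schema standing in for Grafana's dashboard_provisioning table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dashboard_provisioning"
                 " (id INTEGER PRIMARY KEY, dashboard_id INTEGER, name TEXT)")
    conn.execute("INSERT INTO dashboard_provisioning (dashboard_id, name) VALUES (1, 'default')")

    # 1. Rename the old table out of the way (the "_tmp_qwerty" step in the log).
    conn.execute("ALTER TABLE dashboard_provisioning RENAME TO dashboard_provisioning_tmp_qwerty")
    # 2. Create the v2 table with the new shape (here: a hypothetical added column).
    conn.execute("CREATE TABLE dashboard_provisioning (id INTEGER PRIMARY KEY,"
                 " dashboard_id INTEGER, name TEXT, updated INTEGER DEFAULT 0)")
    # 3. Recreate the indexes on the new table.
    conn.execute("CREATE INDEX IDX_dashboard_provisioning_dashboard_id"
                 " ON dashboard_provisioning (dashboard_id)")
    # 4. Copy the surviving columns across.
    conn.execute("INSERT INTO dashboard_provisioning (id, dashboard_id, name)"
                 " SELECT id, dashboard_id, name FROM dashboard_provisioning_tmp_qwerty")
    # 5. Drop the temporary table.
    conn.execute("DROP TABLE dashboard_provisioning_tmp_qwerty")
    print(conn.execute("SELECT * FROM dashboard_provisioning").fetchall())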
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.645487887Z level=info msg="Executing migration" id="Add check_sum column"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.64711371Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.624762ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.648771665Z level=info msg="Executing migration" id="Add index for dashboard_title"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.649381744Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=609.919µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.650898884Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.651087101Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=189.207µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.652653552Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.652807917Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=171.296µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.654520864Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.655375951Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=852.887µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.657310856Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.659483417Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.170161ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.661302476Z level=info msg="Executing migration" id="create data_source table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.662260768Z level=info msg="Migration successfully executed" id="create data_source table" duration=957.272µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.664277364Z level=info msg="Executing migration" id="add index data_source.account_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.665018609Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=741.375µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.667313514Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.667970076Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=656.742µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.670118306Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.671048557Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=930.761µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.672481934Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.673231649Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=752.645µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.674852852Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.679919308Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.053286ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.6823968Z level=info msg="Executing migration" id="create data_source table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.683445164Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.049974ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.6854331Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.686138993Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=706.643µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.687806128Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.688509501Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=705.563µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.690490286Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.691085036Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=595.78µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.692630157Z level=info msg="Executing migration" id="Add column with_credentials"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.694513248Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.883162ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.695982087Z level=info msg="Executing migration" id="Add secure json data column"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.69790631Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.923063ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.699450431Z level=info msg="Executing migration" id="Update data_source table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.699469442Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=19.911µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.701119876Z level=info msg="Executing migration" id="Update initial version to 1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.701360804Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=197.196µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.702715278Z level=info msg="Executing migration" id="Add read_only data column"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.704569379Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.854191ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.705929054Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.706087689Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=159.145µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.707592449Z level=info msg="Executing migration" id="Update json_data with nulls"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.707748584Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=148.405µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.709206561Z level=info msg="Executing migration" id="Add uid column"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.71098904Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.782609ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.712516481Z level=info msg="Executing migration" id="Update uid value"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.712718957Z level=info msg="Migration successfully executed" id="Update uid value" duration=152.995µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.714156394Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.714835037Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=677.913µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.716253024Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.716956676Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=703.692µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.718811948Z level=info msg="Executing migration" id="create api_key table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.71950541Z level=info msg="Migration successfully executed" id="create api_key table" duration=692.892µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.72193667Z level=info msg="Executing migration" id="add index api_key.account_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.722598042Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=661.102µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.724334939Z level=info msg="Executing migration" id="add index api_key.key"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.72499694Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=654.511µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.726981496Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.727657028Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=676.922µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.729813269Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.730551383Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=735.414µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.732084404Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.732928622Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=844.248µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.73440861Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.736142017Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.721106ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.738748112Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.750877882Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=12.114889ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.753776877Z level=info msg="Executing migration" id="create api_key table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.754923535Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.147818ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.756992923Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.75810728Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.090206ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.760562081Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.761658406Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.098126ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.763477066Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.764544221Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.067295ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.768280264Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.768943776Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=665.072µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.770695993Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.771668876Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=968.902µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.773514176Z level=info msg="Executing migration" id="Update api_key table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.773550807Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=41.881µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.775485291Z level=info msg="Executing migration" id="Add expires to api_key table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.778773449Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.285158ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.780674022Z level=info msg="Executing migration" id="Add service account foreign key"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.784446465Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.767773ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.786949078Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.787204576Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=256.648µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.789085518Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.792565912Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.479775ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.794554768Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.798140286Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.582057ms
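Every "Migration successfully executed" entry carries a duration in µs or ms, so slow steps such as the 12.114889 ms api_key rename above can be surfaced by scanning the capture. A minimal sketch, with the regex tuned to the format shown here and "messages.log" as a hypothetical path to this journal:

    import re

    # Matches the migrator's result lines, e.g.:
    #   msg="Migration successfully executed" id="create api_key table v2" duration=1.147818ms
    PAT = re.compile(r'id="([^"]+)" duration=([\d.]+)(µs|ms|s)')
    SCALE = {"µs": 1e-6, "ms": 1e-3, "s": 1.0}

    durations = []
    with open("messages.log", encoding="utf-8") as fh:  # hypothetical path to this capture
        for line in fh:
            m = PAT.search(line)
            if m:
                mig_id, value, unit = m.groups()
                durations.append((float(value) * SCALE[unit], mig_id))

    # Show the five slowest migrations.
    for secs, mig_id in sorted(durations, reverse=True)[:5]:
        print(f"{secs * 1e3:9.3f} ms  {mig_id}")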
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:54 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.80040819Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.801740964Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.333124ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.803689558Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.804874457Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.185379ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.806902934Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.808089473Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.186099ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.809960404Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.811128053Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.167759ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.813553133Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.814936648Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.382905ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.817195312Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.818600269Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.410147ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.820968276Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.82108963Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=121.224µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.823241281Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.823280812Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=41.381µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.825463185Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.829365823Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.903017ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.831150581Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.835096661Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.94435ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.837827731Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.837934924Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=100.123µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.839994542Z level=info msg="Executing migration" id="create quota table v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.841205092Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.2113ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.843861869Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.844988777Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.124258ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.847476318Z level=info msg="Executing migration" id="Update quota table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.84751362Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.762µs
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 10.7 scrub starts
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 10.7 scrub ok
Oct 10 09:47:54 compute-0 ceph-mon[73551]: pgmap v72: 353 pgs: 8 remapped+peering, 14 active+remapped, 2 peering, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 632 B/s, 20 objects/s recovering
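The pgmap summary above packs cluster state into one line: of 353 PGs, 329 are active+clean, 14 active+remapped, 8 remapped+peering, and 2 peering, alongside usage and recovery throughput. A minimal sketch for splitting those state counts out of such a line (regex tuned to the format shown here, not an official Ceph parser):

    import re

    line = ("pgmap v72: 353 pgs: 8 remapped+peering, 14 active+remapped, "
            "2 peering, 329 active+clean; 456 KiB data, 121 MiB used, "
            "60 GiB / 60 GiB avail; 632 B/s, 20 objects/s recovering")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: ([^;]+);", line)
    version, total, states = int(m.group(1)), int(m.group(2)), m.group(3)

    # Break "8 remapped+peering, 14 active+remapped, ..." into {state: count}.
    counts = {}
    for part in states.split(","):
        n, state = part.strip().split(" ", 1)
        counts[state] = int(n)

    assert sum(counts.values()) == total  # every PG is accounted for
    print(version, counts)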
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 11.d deep-scrub starts
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 11.d deep-scrub ok
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 7.19 scrub starts
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 7.19 scrub ok
Oct 10 09:47:54 compute-0 ceph-mon[73551]: 10.1b scrub starts
Oct 10 09:47:54 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.84965634Z level=info msg="Executing migration" id="create plugin_setting table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.850830638Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.172318ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.854181248Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.855244284Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.070376ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.857054503Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.859423091Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.366997ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.860983202Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.861007763Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.621µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.862486431Z level=info msg="Executing migration" id="create session table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.863247877Z level=info msg="Migration successfully executed" id="create session table" duration=760.036µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.865093577Z level=info msg="Executing migration" id="Drop old table playlist table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.86517155Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=78.293µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.867222238Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.867379213Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=161.935µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.869160671Z level=info msg="Executing migration" id="create playlist table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.869936737Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=776.026µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.871734806Z level=info msg="Executing migration" id="create playlist item table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.872417309Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=681.422µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.874016291Z level=info msg="Executing migration" id="Update playlist table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.874035551Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=19.34µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.875481739Z level=info msg="Executing migration" id="Update playlist_item table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.87550193Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=19.991µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.876907736Z level=info msg="Executing migration" id="Add playlist column created_at"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.879246593Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.339256ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.880991651Z level=info msg="Executing migration" id="Add playlist column updated_at"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.883731761Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.74122ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.885693045Z level=info msg="Executing migration" id="drop preferences table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.885784318Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=91.843µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.888177866Z level=info msg="Executing migration" id="drop preferences table v3"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.88826984Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=92.233µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.890641637Z level=info msg="Executing migration" id="create preferences table v3"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.891647991Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.003434ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.893677497Z level=info msg="Executing migration" id="Update preferences table charset"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.893706808Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=30.021µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.895480366Z level=info msg="Executing migration" id="Add column team_id in preferences"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.89862857Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.147474ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.900334106Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.900500272Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=186.696µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.902145256Z level=info msg="Executing migration" id="Add column week_start in preferences"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.905214647Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.067681ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.906766457Z level=info msg="Executing migration" id="Add column preferences.json_data"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.909200578Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.432791ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.910944345Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.911000537Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=56.122µs
Oct 10 09:47:54 compute-0 podman[98774]: 2025-10-10 09:47:54.910157119 +0000 UTC m=+0.041798955 container create 651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c (image=quay.io/ceph/haproxy:2.3, name=gracious_khayyam)
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.913441087Z level=info msg="Executing migration" id="Add preferences index org_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.914208743Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=767.556µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.916280851Z level=info msg="Executing migration" id="Add preferences index user_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.917097108Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=816.138µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.920052765Z level=info msg="Executing migration" id="create alert table v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.921197862Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.144317ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.923063164Z level=info msg="Executing migration" id="add index alert org_id & id "
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.923850019Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=788.945µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.925566796Z level=info msg="Executing migration" id="add index alert state"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.926358582Z level=info msg="Migration successfully executed" id="add index alert state" duration=789.005µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.928495543Z level=info msg="Executing migration" id="add index alert dashboard_id"
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.929761) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674929835, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1004, "num_deletes": 251, "total_data_size": 1233064, "memory_usage": 1258192, "flush_reason": "Manual Compaction"}
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.935414559Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=6.917277ms
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674937671, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1187808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6917, "largest_seqno": 7920, "table_properties": {"data_size": 1182537, "index_size": 2603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13502, "raw_average_key_size": 21, "raw_value_size": 1171020, "raw_average_value_size": 1847, "num_data_blocks": 115, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089648, "oldest_key_time": 1760089648, "file_creation_time": 1760089674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7948 microseconds, and 4741 cpu microseconds.
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.938047026Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.937720) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1187808 bytes OK
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.937745) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.938874) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.938909) EVENT_LOG_v1 {"time_micros": 1760089674938900, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.938935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1227837, prev total WAL file size 1227837, number of live WAL files 2.
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.939623028Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.571102ms
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.939640) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1159KB)], [20(10MB)]
Oct 10 09:47:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674939842, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12570774, "oldest_snapshot_seqno": -1}
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.942252355Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.943964101Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.711546ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.946286017Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.947243869Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=952.841µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.949312146Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Oct 10 09:47:54 compute-0 systemd[1]: Started libpod-conmon-651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c.scope.
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.957655661Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.335964ms
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.959468681Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.960175164Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=706.062µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.961954243Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.962721167Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=766.374µs
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.964727354Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.965029944Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=302.49µs
Oct 10 09:47:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:47:54 compute-0 podman[98774]: 2025-10-10 09:47:54.891437234 +0000 UTC m=+0.023079090 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.990101978Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Oct 10 09:47:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:54.990946235Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=846.157µs
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3075 keys, 11334748 bytes, temperature: kUnknown
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675012111, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11334748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11310277, "index_size": 15658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 79155, "raw_average_key_size": 25, "raw_value_size": 11249769, "raw_average_value_size": 3658, "num_data_blocks": 685, "num_entries": 3075, "num_filter_entries": 3075, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760089674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.011964877Z level=info msg="Executing migration" id="create alert_notification table v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.013051592Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.090075ms
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.012397) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11334748 bytes
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.015785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.8 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(20.1) write-amplify(9.5) OK, records in: 3602, records dropped: 527 output_compression: NoCompression
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.015805) EVENT_LOG_v1 {"time_micros": 1760089675015795, "job": 6, "event": "compaction_finished", "compaction_time_micros": 72339, "compaction_time_cpu_micros": 26752, "output_level": 6, "num_output_files": 1, "total_output_size": 11334748, "num_input_records": 3602, "num_output_records": 3075, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
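
The compaction_started/compaction_finished pair for JOB 6 lets the amplification figures in the summary line be checked by hand: the manual compaction rewrote one 1.1 MB L0 file plus one 10.9 MB L6 file into a single 10.8 MB L6 file, so write-amplify ≈ 11,334,748 / 1,187,808 ≈ 9.5 and read-write-amplify ≈ (12,570,774 + 11,334,748) / 1,187,808 ≈ 20.1, matching what RocksDB printed. The same arithmetic as a sketch:

    # Sketch: re-derive the amplification figures printed for [JOB 6].
    # All three inputs are copied verbatim from the events above.
    l0_input = 1_187_808        # table #22, the L0 file fed into the compaction
    total_input = 12_570_774    # "input_data_size" in compaction_started
    total_output = 11_334_748   # "total_output_size" in compaction_finished

    write_amp = total_output / l0_input                # ~9.5, as logged
    rw_amp = (total_input + total_output) / l0_input   # ~20.1, as logged
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
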
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.015960438Z level=info msg="Executing migration" id="Add column is_default"
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675016075, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 10 09:47:55 compute-0 podman[98774]: 2025-10-10 09:47:55.016860958 +0000 UTC m=+0.148502834 container init 651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c (image=quay.io/ceph/haproxy:2.3, name=gracious_khayyam)
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675017741, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:54.939539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.017836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.017843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.017845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.017849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:47:55.017851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.019118472Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.157404ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.020592771Z level=info msg="Executing migration" id="Add column frequency"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.023701723Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.110402ms
Oct 10 09:47:55 compute-0 podman[98774]: 2025-10-10 09:47:55.024360244 +0000 UTC m=+0.156002070 container start 651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c (image=quay.io/ceph/haproxy:2.3, name=gracious_khayyam)
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.025240193Z level=info msg="Executing migration" id="Add column send_reminder"
Oct 10 09:47:55 compute-0 podman[98774]: 2025-10-10 09:47:55.027206498 +0000 UTC m=+0.158848354 container attach 651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c (image=quay.io/ceph/haproxy:2.3, name=gracious_khayyam)
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.028533162Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.291819ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.030210406Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Oct 10 09:47:55 compute-0 gracious_khayyam[98791]: 0 0
Oct 10 09:47:55 compute-0 podman[98774]: 2025-10-10 09:47:55.031672815 +0000 UTC m=+0.163314671 container died 651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c (image=quay.io/ceph/haproxy:2.3, name=gracious_khayyam)
Oct 10 09:47:55 compute-0 systemd[1]: libpod-651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c.scope: Deactivated successfully.
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.033375651Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.164315ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.035094298Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.035866762Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=772.394µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.037942181Z level=info msg="Executing migration" id="Update alert table charset"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.037965572Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=23.281µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.039402249Z level=info msg="Executing migration" id="Update alert_notification table charset"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.03942675Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=22.741µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.040903199Z level=info msg="Executing migration" id="create notification_journal table v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.041532449Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=628.99µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.043311998Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.044095093Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=784.765µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.04641983Z level=info msg="Executing migration" id="drop alert_notification_journal"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.047199775Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=782.445µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.048924062Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.049759719Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=834.947µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.051411524Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.052311353Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=900.209µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.054067041Z level=info msg="Executing migration" id="Add for to alert table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.057831766Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.762104ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.059432958Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.062121066Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.687598ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.063906194Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.06407164Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=165.556µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.065780127Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.066464939Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=684.551µs
Oct 10 09:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-daae4583572e18563e5de78d9fca3ea9d94d5c4cfa5fa8fe27fbec436b123560-merged.mount: Deactivated successfully.
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.071310749Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.072446515Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.139447ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.074164692Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.077621365Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.452283ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.080284533Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.080389316Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=103.143µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.082758695Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.083589462Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=830.087µs
Oct 10 09:47:55 compute-0 podman[98774]: 2025-10-10 09:47:55.083784338 +0000 UTC m=+0.215426174 container remove 651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c (image=quay.io/ceph/haproxy:2.3, name=gracious_khayyam)
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.085235796Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.086181377Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=945.221µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.088267376Z level=info msg="Executing migration" id="Drop old annotation table v4"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.08837882Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=109.264µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.089851848Z level=info msg="Executing migration" id="create annotation table v5"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.090576712Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=723.694µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.092788104Z level=info msg="Executing migration" id="add index annotation 0 v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.093629382Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=830.558µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.097245751Z level=info msg="Executing migration" id="add index annotation 1 v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.098047577Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=800.956µs
Oct 10 09:47:55 compute-0 systemd[1]: libpod-conmon-651614635b4ab1684b1a1809cba9e7460405b6e727cf0db4a21e84d2dc5c119c.scope: Deactivated successfully.
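
Interleaved with the migrator output is one complete podman lifecycle for the quay.io/ceph/haproxy:2.3 container gracious_khayyam: conmon scope started, image pull recorded, container init, start, attach, a single line of output ("0 0"), died, removed, and both scopes deactivated, all within roughly 200 ms. That shape is consistent with a deliberately short-lived probe container rather than a crash. A sketch that folds such journal lines into per-container event timelines (the regex is an assumption fitted to these exact lines):

    # Sketch: group podman journal lines into per-container lifecycles.
    # The event names (init/start/attach/died/remove) are the ones podman logs;
    # the line shape assumed here matches the entries above.
    import re
    import sys
    from collections import defaultdict

    PODMAN_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \S+) \+0000 UTC m=\S+"
        r" container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifecycles(lines):
        events = defaultdict(list)
        for line in lines:
            m = PODMAN_RE.search(line)
            if m:
                events[m.group("cid")[:12]].append((m.group("ts"), m.group("event")))
        return events

    if __name__ == "__main__":
        for cid, evs in lifecycles(sys.stdin).items():
            print(cid, " -> ".join(e for _, e in evs))
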
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.154728331Z level=info msg="Executing migration" id="add index annotation 2 v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.15653465Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.809059ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.195179801Z level=info msg="Executing migration" id="add index annotation 3 v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.197233549Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=2.055908ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.201345274Z level=info msg="Executing migration" id="add index annotation 4 v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.203152624Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.80703ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.223086468Z level=info msg="Executing migration" id="Update annotation table charset"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.2231454Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=62.742µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.2638807Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.272450432Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=8.572872ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.358938856Z level=info msg="Executing migration" id="Drop category_id index"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.360228898Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.294492ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.413905893Z level=info msg="Executing migration" id="Add column tags to annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.417973427Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.071174ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.420918184Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.421638388Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=719.814µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.423776168Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.424622985Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=846.787µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.427359276Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.428188273Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=829.267µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.430661664Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.43905835Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.393146ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.441252213Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.441997977Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=748.774µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.443675513Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.444560841Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=885.038µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.44694487Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.447306531Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=361.381µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.449016028Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.449621958Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=606.2µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.451298313Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.451561812Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=264.209µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.453205975Z level=info msg="Executing migration" id="Add created time to annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.456251946Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.045371ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.45821491Z level=info msg="Executing migration" id="Add updated time to annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.461166007Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.950877ms
Oct 10 09:47:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 469 B/s, 17 objects/s recovering
Oct 10 09:47:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Oct 10 09:47:55 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
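
The audit line shows mgr.compute-0.xkdepb nudging pgp_num_actual on pool default.rgw.log up to 4; the dispatched JSON is the same mon-command payload any client would send. A minimal sketch of issuing that command through the librados Python binding (the conffile path and an admin keyring being available are assumptions):

    # Sketch: send the same mon command the mgr dispatched, via librados.
    # Assumes the rados Python binding and client.admin access.
    import json
    import rados

    cmd = {"prefix": "osd pool set", "pool": "default.rgw.log",
           "var": "pgp_num_actual", "val": "4"}

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, outs)
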
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.560890196Z level=info msg="Executing migration" id="Add index for created in annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.563079388Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=2.193302ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.566499731Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.568507446Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=2.007785ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.571593668Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.571935789Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=344.441µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.574834365Z level=info msg="Executing migration" id="Add epoch_end column"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.57866131Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.827285ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.580571734Z level=info msg="Executing migration" id="Add index for epoch_end"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.581511034Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=942.301µs
Oct 10 09:47:55 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.583866762Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.584067949Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=201.657µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.586007312Z level=info msg="Executing migration" id="Move region to single row"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.586654184Z level=info msg="Migration successfully executed" id="Move region to single row" duration=646.202µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.588439992Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.589227099Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=787.886µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.590681646Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.591380509Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=699.063µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.593024373Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Oct 10 09:47:55 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.594050487Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.025684ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.595734242Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.596490667Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=756.075µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.59809308Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.598785412Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=691.942µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.600144838Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.600834231Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=689.322µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.602292148Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.602433133Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=143.905µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.604390447Z level=info msg="Executing migration" id="create test_data table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.605189233Z level=info msg="Migration successfully executed" id="create test_data table" duration=798.716µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.60721581Z level=info msg="Executing migration" id="create dashboard_version table v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.607979205Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=763.475µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.609526495Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.610237989Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=711.524µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.611974877Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.612895176Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=919.929µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.61450286Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.614731907Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=229.337µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.616229486Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.616664341Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=434.895µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.61818202Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.618285674Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=104.294µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.619902557Z level=info msg="Executing migration" id="create team table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.620768735Z level=info msg="Migration successfully executed" id="create team table" duration=866.268µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.622584745Z level=info msg="Executing migration" id="add index team.org_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.62366148Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.077285ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.625496011Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.626541805Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.045024ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.628360785Z level=info msg="Executing migration" id="Add column uid in team"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.632562233Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.193708ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.63427252Z level=info msg="Executing migration" id="Update uid column values in team"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.634515598Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=245.908µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.635951385Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.636966299Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.016673ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.639007675Z level=info msg="Executing migration" id="create team member table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.639828773Z level=info msg="Migration successfully executed" id="create team member table" duration=821.648µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.642076856Z level=info msg="Executing migration" id="add index team_member.org_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.642887293Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=808.417µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.644815706Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.645686685Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=871.019µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.647510345Z level=info msg="Executing migration" id="add index team_member.team_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.648283411Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=773.476µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.65009347Z level=info msg="Executing migration" id="Add column email to team table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.653584255Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.488665ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.683824599Z level=info msg="Executing migration" id="Add column external to team_member table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.689116303Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.293464ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.690969614Z level=info msg="Executing migration" id="Add column permission to team_member table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.694344425Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.373681ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.696460254Z level=info msg="Executing migration" id="create dashboard acl table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.697351234Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=892.16µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.70027342Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.701098497Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=825.417µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.702825034Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.703698493Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=870.949µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.705487311Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.706278527Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=791.326µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.709130631Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.709932568Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=802.727µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.711671755Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.71242892Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=757.145µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.770146057Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.772121972Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.979675ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.774757629Z level=info msg="Executing migration" id="add index dashboard_permission"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.775562435Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=805.076µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.777389115Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.777947114Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=558.379µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.779598569Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.779818956Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=222.808µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.782037789Z level=info msg="Executing migration" id="create tag table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.782837335Z level=info msg="Migration successfully executed" id="create tag table" duration=802.827µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.784711577Z level=info msg="Executing migration" id="add index tag.key_value"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.785473642Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=761.836µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.787459757Z level=info msg="Executing migration" id="create login attempt table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.78815273Z level=info msg="Migration successfully executed" id="create login attempt table" duration=690.573µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.790189587Z level=info msg="Executing migration" id="add index login_attempt.username"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.791075116Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=885.759µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.792938507Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.793900509Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=962.172µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.795265043Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.807989162Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.719629ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.81006026Z level=info msg="Executing migration" id="create login_attempt v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.810851636Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=791.196µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.812401827Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.813140831Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=738.204µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.81522224Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.81552453Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=302.16µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.816892594Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.817697191Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=804.937µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.819640924Z level=info msg="Executing migration" id="create user auth table"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.820629827Z level=info msg="Migration successfully executed" id="create user auth table" duration=988.703µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.822791518Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.823587315Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=795.587µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.825487517Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.82558734Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=99.303µs
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.827439482Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.831092041Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.655219ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.832468857Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.836036545Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.567507ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.837393779Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.840879263Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.483254ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.842361782Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.846009732Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.64772ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:47:55.888Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001820155s
Oct 10 09:47:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.958210811Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.959734131Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.511459ms
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.971154096Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Oct 10 09:47:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:55.976463541Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.311825ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.030787538Z level=info msg="Executing migration" id="create server_lock table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.032594827Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.810849ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.11906808Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 10.1b scrub ok
Oct 10 09:47:56 compute-0 ceph-mon[73551]: Deploying daemon haproxy.rgw.default.compute-0.ofnenu on compute-0
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 8.e scrub starts
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 8.e scrub ok
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 12.15 scrub starts
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 12.15 scrub ok
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 12.7 deep-scrub starts
Oct 10 09:47:56 compute-0 ceph-mon[73551]: 12.7 deep-scrub ok
Oct 10 09:47:56 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.120415325Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.352265ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.123113603Z level=info msg="Executing migration" id="create user auth token table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.123853868Z level=info msg="Migration successfully executed" id="create user auth token table" duration=739.965µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.126812915Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.127598441Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=785.945µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.131556361Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.133836566Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.281175ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.145991016Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.147877948Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.887571ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.151619131Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.160064709Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.444838ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:56 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8080016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.217791686Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.219571706Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.780839ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.223760013Z level=info msg="Executing migration" id="create cache_data table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.225415117Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.654014ms
Oct 10 09:47:56 compute-0 systemd[1]: Reloading.
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.243244574Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.244959209Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.716426ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.261757362Z level=info msg="Executing migration" id="create short_url table v1"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.264216974Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=2.458701ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.284315284Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.286144234Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.82921ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.29059431Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.290738515Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=141.655µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.294794759Z level=info msg="Executing migration" id="delete alert_definition table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.295050257Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=256.608µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.298150279Z level=info msg="Executing migration" id="recreate alert_definition table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.300244198Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=2.09553ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.303699531Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.305891804Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.192613ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.310168654Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.312525261Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.355297ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.316060998Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.316174922Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=116.114µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.325686295Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.328160176Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.471011ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.337213894Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.339894842Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=2.682968ms
Oct 10 09:47:56 compute-0 systemd-sysv-generator[98842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.351099381Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.3535212Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=2.41893ms
Oct 10 09:47:56 compute-0 systemd-rc-local-generator[98836]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.357649915Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.358774643Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.124737ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.36084325Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.36691813Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.07342ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.370068834Z level=info msg="Executing migration" id="drop alert_definition table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.371245072Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.175448ms
Oct 10 09:47:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 10 09:47:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.374485609Z level=info msg="Executing migration" id="delete alert_definition_version table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.374583882Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=98.993µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.377065534Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.378217632Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.148059ms
Oct 10 09:47:56 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.413897945Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.414926879Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.032514ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.41737903Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.41862023Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.24088ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.421091882Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.421143554Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=52.352µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.424659079Z level=info msg="Executing migration" id="drop alert_definition_version table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.425561498Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=901.87µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.427512042Z level=info msg="Executing migration" id="create alert_instance table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.428240187Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=727.725µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.430052056Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.431067229Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.014113ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.433800539Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.435255017Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.457728ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.442404382Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.448739351Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.334019ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.450563021Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.451604745Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.041515ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.453469866Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.454614854Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.144228ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.456742764Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.484166746Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.416061ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.486871074Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.509694075Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.818351ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.522405003Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.523426857Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.023724ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.525550006Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.526365373Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=815.607µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.528656918Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.534135428Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.47721ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.536679352Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.54145576Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.775738ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.542978319Z level=info msg="Executing migration" id="create alert_rule table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.543808747Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=829.848µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.54726209Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:56 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.548375437Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.115387ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.550483646Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.551308223Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=826.987µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.554125236Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.555202782Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.077436ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.558088227Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.558144478Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=55.981µs
Oct 10 09:47:56 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.560553348Z level=info msg="Executing migration" id="add column for to alert_rule"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.567015869Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.451242ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.568767638Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Oct 10 09:47:56 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.573793903Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.025396ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.57554153Z level=info msg="Executing migration" id="add column labels to alert_rule"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.580859324Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.316804ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.582719426Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.583508223Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=788.176µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.585091974Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.585996764Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=904.48µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.587460352Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.591474164Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.012852ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.594564196Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.598733103Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.165686ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.600560082Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.60138989Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=829.138µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.603742187Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Oct 10 09:47:56 compute-0 systemd[1]: Reloading.
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.607993797Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.24678ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.609951372Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.614376078Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.416836ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.616556389Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.616625821Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=70.032µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.618342847Z level=info msg="Executing migration" id="create alert_rule_version table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.619425893Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.082886ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.621267524Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.622101201Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=835.607µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.624308814Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.625148592Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=839.518µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.627045034Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.627100235Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=54.751µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.628569093Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.633529057Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.959604ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.635185931Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.64002553Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.837709ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.641626903Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.646238455Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.610821ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.64790971Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.652869533Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.955233ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.654699693Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.659375986Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.676253ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.660970069Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.661023091Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=53.262µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.662770058Z level=info msg="Executing migration" id=create_alert_configuration_table
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.663462271Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=689.873µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.669083236Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.673646336Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.56436ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.675394434Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.675443245Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=49.731µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.677173112Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.681759512Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.58584ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.683263162Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.68408305Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=819.778µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.686337754Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Oct 10 09:47:56 compute-0 systemd-sysv-generator[98886]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.691011827Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.672824ms
Oct 10 09:47:56 compute-0 systemd-rc-local-generator[98883]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.692783106Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.693440067Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=657.171µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.695501985Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.696265339Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=762.834µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.698399979Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.703378674Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.977585ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.704998657Z level=info msg="Executing migration" id="create provenance_type table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.705606787Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=609.73µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.70753898Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.708284825Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=745.575µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.711446329Z level=info msg="Executing migration" id="create alert_image table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.712047909Z level=info msg="Migration successfully executed" id="create alert_image table" duration=602.73µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.715272744Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.71603325Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=760.336µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.717998614Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.718056336Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=58.072µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.722175092Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.723007809Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=832.037µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.725043226Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.725876164Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=829.018µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.727566419Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.727917551Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.729551354Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.729970878Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=418.934µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.731655744Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.732417668Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=764.344µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.734148116Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.739396528Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.247212ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.741249939Z level=info msg="Executing migration" id="create library_element table v1"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.742171109Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=921.2µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.744457664Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.74552045Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.062446ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.748055233Z level=info msg="Executing migration" id="create library_element_connection table v1"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.749014994Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=959.681µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.750892516Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.75193944Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.046314ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.753825573Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.754638529Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=812.856µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.756452469Z level=info msg="Executing migration" id="increase max description length to 2048"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.7564716Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=19.861µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.758128894Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.758176145Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=47.911µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.759842681Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.760071237Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=228.466µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.761719472Z level=info msg="Executing migration" id="create data_keys table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.762530299Z level=info msg="Migration successfully executed" id="create data_keys table" duration=810.637µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.764740412Z level=info msg="Executing migration" id="create secrets table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.765376323Z level=info msg="Migration successfully executed" id="create secrets table" duration=635.911µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.767399809Z level=info msg="Executing migration" id="rename data_keys name column to id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.792695311Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=25.290542ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.795069419Z level=info msg="Executing migration" id="add name column into data_keys"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:56 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.800612091Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.542772ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.802573405Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.802740091Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=168.076µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.804838449Z level=info msg="Executing migration" id="rename data_keys name column to label"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.835312152Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=30.463393ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.83737402Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.868418921Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.03852ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.870602242Z level=info msg="Executing migration" id="create kv_store table v1"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.871468461Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=865.549µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.874977776Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.875891016Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=914.08µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.877884911Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.878073518Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=188.447µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.879782075Z level=info msg="Executing migration" id="create permission table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.88058633Z level=info msg="Migration successfully executed" id="create permission table" duration=804.386µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.884050894Z level=info msg="Executing migration" id="add unique index permission.role_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.885146991Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.101197ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.887382634Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.888190201Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=806.797µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.890076673Z level=info msg="Executing migration" id="create role table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.890796977Z level=info msg="Migration successfully executed" id="create role table" duration=720.004µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.892813173Z level=info msg="Executing migration" id="add column display_name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.899778072Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.940638ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.901931293Z level=info msg="Executing migration" id="add column group_name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.908682434Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.746681ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.910819755Z level=info msg="Executing migration" id="add index role.org_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.911863279Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.045684ms
Oct 10 09:47:56 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.ofnenu for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.913958508Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.914959001Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.000213ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.91707689Z level=info msg="Executing migration" id="add index role_org_id_uid"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.918105484Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.028104ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.920463242Z level=info msg="Executing migration" id="create team role table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.921236397Z level=info msg="Migration successfully executed" id="create team role table" duration=773.195µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.923288835Z level=info msg="Executing migration" id="add index team_role.org_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.924251807Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=960.211µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.926369186Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.927347679Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=979.202µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.929302463Z level=info msg="Executing migration" id="add index team_role.team_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.930152091Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=849.269µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.932471087Z level=info msg="Executing migration" id="create user role table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.933131639Z level=info msg="Migration successfully executed" id="create user role table" duration=660.882µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.935462935Z level=info msg="Executing migration" id="add index user_role.org_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.936384335Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=921.56µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.938611379Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.939601171Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=989.302µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.94168372Z level=info msg="Executing migration" id="add index user_role.user_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.942758385Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.075995ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.946093285Z level=info msg="Executing migration" id="create builtin role table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.947208572Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.119048ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.949260679Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.950274563Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.013943ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.951987859Z level=info msg="Executing migration" id="add index builtin_role.name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.952803856Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=815.728µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.954735329Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.961129839Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.39199ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.963060813Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.963901751Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=840.938µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.965772202Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.966682442Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=912.11µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.968430259Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.969240836Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=810.407µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.970805637Z level=info msg="Executing migration" id="add unique index role.uid"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.971582173Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=776.836µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.972916736Z level=info msg="Executing migration" id="create seed assignment table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.973559238Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=642.222µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.975362637Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.976173343Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=810.216µs
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.977964713Z level=info msg="Executing migration" id="add column hidden to role table"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.983725742Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.757869ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.985295534Z level=info msg="Executing migration" id="permission kind migration"
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.991772477Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.474213ms
Oct 10 09:47:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.993503563Z level=info msg="Executing migration" id="permission attribute migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:56.999898264Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.391981ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.001658532Z level=info msg="Executing migration" id="permission identifier migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.007231605Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.567414ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.009096507Z level=info msg="Executing migration" id="add permission identifier index"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.009980925Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=883.808µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.012769977Z level=info msg="Executing migration" id="add permission action scope role_id index"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.013692218Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=919.46µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.016425647Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.017231394Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=805.577µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.01894783Z level=info msg="Executing migration" id="create query_history table v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.019662174Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=714.514µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.021296667Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.022157126Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=860.149µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.024133941Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.024189082Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=55.541µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.025795156Z level=info msg="Executing migration" id="rbac disabled migrator"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.025825507Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=30.801µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.027537903Z level=info msg="Executing migration" id="teams permissions migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.027895345Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=356.372µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.030055815Z level=info msg="Executing migration" id="dashboard permissions"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.030512261Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=457.786µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.03203881Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.03263035Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=590.89µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.034661227Z level=info msg="Executing migration" id="drop managed folder create actions"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.034817642Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=156.156µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.036345483Z level=info msg="Executing migration" id="alerting notification permissions"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.036720195Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=373.282µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.038655748Z level=info msg="Executing migration" id="create query_history_star table v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.039400283Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=744.305µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.041584265Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.042451514Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=864.348µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.044340275Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.049814985Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.47422ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.05148168Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.051535551Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=54.621µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.053228518Z level=info msg="Executing migration" id="create correlation table v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.054114956Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=885.978µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.05604177Z level=info msg="Executing migration" id="add index correlations.uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.056837396Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=795.716µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.059812264Z level=info msg="Executing migration" id="add index correlations.source_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.06062191Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=809.556µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.062221104Z level=info msg="Executing migration" id="add correlation config column"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.068464648Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.239034ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.074262859Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.075225041Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=962.672µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.077280238Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.078781518Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.50535ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.080634099Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.100671228Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.032119ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.102745256Z level=info msg="Executing migration" id="create correlation v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.103914574Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.170569ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.105637371Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.106582122Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=945.241µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.108532476Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.109443056Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=910.29µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.111151742Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.11199765Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=845.708µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.113913413Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.114153081Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=239.678µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.115516866Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.116217928Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=703.582µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.117508551Z level=info msg="Executing migration" id="add provisioning column"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.124037965Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.526454ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.126107354Z level=info msg="Executing migration" id="create entity_events table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.126822337Z level=info msg="Migration successfully executed" id="create entity_events table" duration=715.833µs
Oct 10 09:47:57 compute-0 ceph-mon[73551]: pgmap v73: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 469 B/s, 17 objects/s recovering
Oct 10 09:47:57 compute-0 ceph-mon[73551]: 9.c scrub starts
Oct 10 09:47:57 compute-0 ceph-mon[73551]: 9.c scrub ok
Oct 10 09:47:57 compute-0 ceph-mon[73551]: 7.c scrub starts
Oct 10 09:47:57 compute-0 ceph-mon[73551]: 7.c scrub ok
Oct 10 09:47:57 compute-0 ceph-mon[73551]: 12.4 scrub starts
Oct 10 09:47:57 compute-0 ceph-mon[73551]: 12.4 scrub ok
Oct 10 09:47:57 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 10 09:47:57 compute-0 ceph-mon[73551]: osdmap e68: 3 total, 3 up, 3 in
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.130392505Z level=info msg="Executing migration" id="create dashboard public config v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.131261043Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=869.859µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.137920623Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.138426049Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.140107184Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.140511528Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.142281946Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.143110043Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=827.507µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.144831069Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.145636526Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=805.197µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.147831148Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.148816031Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=984.683µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.150895289Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 10 09:47:57 compute-0 podman[98941]: 2025-10-10 09:47:57.151093096 +0000 UTC m=+0.039478650 container create 7886d0bbca6f8b440812540cf652925c0ea256ace4a2ff565e6576b7e7e63b15 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-rgw-default-compute-0-ofnenu)
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.151739307Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=843.548µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.153470864Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.155520951Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.049817ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.157393773Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.158474618Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.083435ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.160218206Z level=info msg="Executing migration" id="Drop public config table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.161227178Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.009072ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.172886782Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.174299089Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.415537ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.17861469Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.180852294Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.240664ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.183719308Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.185033292Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.314424ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.186611353Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.187507063Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=895.68µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.18956292Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Oct 10 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1fe4ed197222a80b46e62b2d356e68075eedfe5fb39a8c684550f47bdaf075e/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:57 compute-0 podman[98941]: 2025-10-10 09:47:57.208478232 +0000 UTC m=+0.096863806 container init 7886d0bbca6f8b440812540cf652925c0ea256ace4a2ff565e6576b7e7e63b15 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-rgw-default-compute-0-ofnenu)
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.212423952Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.853122ms
Oct 10 09:47:57 compute-0 podman[98941]: 2025-10-10 09:47:57.214673956 +0000 UTC m=+0.103059520 container start 7886d0bbca6f8b440812540cf652925c0ea256ace4a2ff565e6576b7e7e63b15 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-rgw-default-compute-0-ofnenu)
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.214863522Z level=info msg="Executing migration" id="add annotations_enabled column"
Oct 10 09:47:57 compute-0 bash[98941]: 7886d0bbca6f8b440812540cf652925c0ea256ace4a2ff565e6576b7e7e63b15
Oct 10 09:47:57 compute-0 podman[98941]: 2025-10-10 09:47:57.133418984 +0000 UTC m=+0.021804568 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.22176003Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.895187ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.22361428Z level=info msg="Executing migration" id="add time_selection_enabled column"
Oct 10 09:47:57 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.ofnenu for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-rgw-default-compute-0-ofnenu[98956]: [NOTICE] 282/094757 (2) : New worker #1 (4) forked
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.236290916Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=12.667106ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.239310556Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.239679988Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=370.412µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.241379795Z level=info msg="Executing migration" id="add share column"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.2512754Z level=info msg="Migration successfully executed" id="add share column" duration=9.892184ms
Oct 10 09:47:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:47:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000067s ======
Oct 10 09:47:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:47:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.258734235Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.259096347Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=362.762µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.260944598Z level=info msg="Executing migration" id="create file table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.262188588Z level=info msg="Migration successfully executed" id="create file table" duration=1.24249ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.264314988Z level=info msg="Executing migration" id="file table idx: path natural pk"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.265630942Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.315604ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.267579356Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.268771425Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.1927ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.270516402Z level=info msg="Executing migration" id="create file_meta table"
Oct 10 09:47:57 compute-0 sudo[98707]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.271445923Z level=info msg="Migration successfully executed" id="create file_meta table" duration=929.831µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.273419528Z level=info msg="Executing migration" id="file table idx: path key"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.274794403Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.373805ms
Oct 10 09:47:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.279285041Z level=info msg="Executing migration" id="set path collation in file table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.279434956Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=149.755µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.28140352Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.281472232Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=69.562µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.283685375Z level=info msg="Executing migration" id="managed permissions migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.284270224Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=585.579µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.28625719Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.286537529Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=280.339µs
Oct 10 09:47:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.288591297Z level=info msg="Executing migration" id="RBAC action name migrator"
Oct 10 09:47:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.290192009Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.600322ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.292002898Z level=info msg="Executing migration" id="Add UID column to playlist"
Oct 10 09:47:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.30268564Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.674402ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.305769342Z level=info msg="Executing migration" id="Update uid column values in playlist"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.306016549Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=250.877µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.307805209Z level=info msg="Executing migration" id="Add index for uid in playlist"
Oct 10 09:47:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.308863133Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.056634ms
Oct 10 09:47:57 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.mhdkdo on compute-2
Oct 10 09:47:57 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.mhdkdo on compute-2
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.312574775Z level=info msg="Executing migration" id="update group index for alert rules"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.313167195Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=597.45µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.315230402Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.315501661Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=271.409µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.31728564Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.317812167Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=526.637µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.319615887Z level=info msg="Executing migration" id="add action column to seed_assignment"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.329203112Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.551604ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.331312061Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.341962572Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.647161ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.34404613Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.345607251Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.562731ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.347348339Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.424707772Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.347443ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.426899584Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.427901938Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.002874ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.429641214Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.430800553Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.158249ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.432964264Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.455433213Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=22.464068ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.458209684Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.464993847Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.778213ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.46689412Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.467223491Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=330.261µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.468884995Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.46903999Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=155.475µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.470466287Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.470648153Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=181.906µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.472243535Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.472426242Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=183.036µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.473896939Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.474057655Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=160.686µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.476050271Z level=info msg="Executing migration" id="create folder table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.476886348Z level=info msg="Migration successfully executed" id="create folder table" duration=836.177µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.478541433Z level=info msg="Executing migration" id="Add index for parent_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.479611817Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.069944ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.481525761Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.482542284Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.016973ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.484591112Z level=info msg="Executing migration" id="Update folder title length"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.484611893Z level=info msg="Migration successfully executed" id="Update folder title length" duration=22.091µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.486032449Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.487543559Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.51045ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.489683659Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.49095622Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.273461ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.492861794Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.494031702Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.178209ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.496280055Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.497001759Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=722.934µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.498684155Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.499097858Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=417.413µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.500750283Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.501934911Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.184498ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.503502343Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.504504657Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.003174ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.506195192Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.507089892Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=896µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.509394347Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.510508783Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.113126ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.512136377Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.513094388Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=958.461µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.514836746Z level=info msg="Executing migration" id="create anon_device table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.515832569Z level=info msg="Migration successfully executed" id="create anon_device table" duration=996.033µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.51739681Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.518799196Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.402106ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.521116233Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.522228889Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.113386ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.5243854Z level=info msg="Executing migration" id="create signing_key table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.525666072Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.278053ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.527772881Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.529024653Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.253182ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.531017578Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.531941159Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=927.321µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.533272422Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Oct 10 09:47:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.533490109Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=218.327µs
Oct 10 09:47:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Oct 10 09:47:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.535108963Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.541347537Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.233984ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.543165508Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.54382957Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=665.842µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.54537961Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.546301041Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=921.181µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.548235174Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.549105313Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=870.119µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.550653434Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.551539513Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=884.769µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.553331322Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.554252132Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=922.46µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.555822043Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.556684602Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=862.279µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.558106368Z level=info msg="Executing migration" id="create sso_setting table"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.559113022Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.005614ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.56119806Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.561952705Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=753.155µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.563645021Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.563901979Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=257.769µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.566719552Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.566800444Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=81.532µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.568570773Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Oct 10 09:47:57 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.576776603Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=8.20228ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.578555491Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Oct 10 09:47:57 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.586835663Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=8.272882ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.588590072Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.588973973Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=380.641µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=migrator t=2025-10-10T09:47:57.590517905Z level=info msg="migrations completed" performed=547 skipped=0 duration=3.221008254s
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=sqlstore t=2025-10-10T09:47:57.592068755Z level=info msg="Created default organization"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=secrets t=2025-10-10T09:47:57.59402036Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=plugin.store t=2025-10-10T09:47:57.614197723Z level=info msg="Loading plugins..."
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=local.finder t=2025-10-10T09:47:57.694531995Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=plugin.store t=2025-10-10T09:47:57.694567996Z level=info msg="Plugins loaded" count=55 duration=80.372253ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=query_data t=2025-10-10T09:47:57.697467722Z level=info msg="Query Service initialization"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=live.push_http t=2025-10-10T09:47:57.700287434Z level=info msg="Live Push Gateway initialization"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.migration t=2025-10-10T09:47:57.703277103Z level=info msg=Starting
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.migration t=2025-10-10T09:47:57.703729917Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.migration orgID=1 t=2025-10-10T09:47:57.704071188Z level=info msg="Migrating alerts for organisation"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.migration orgID=1 t=2025-10-10T09:47:57.704688429Z level=info msg="Alerts found to migrate" alerts=0
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.migration t=2025-10-10T09:47:57.70622502Z level=info msg="Completed alerting migration"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.state.manager t=2025-10-10T09:47:57.724274163Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=infra.usagestats.collector t=2025-10-10T09:47:57.726102882Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=provisioning.datasources t=2025-10-10T09:47:57.727265331Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=provisioning.alerting t=2025-10-10T09:47:57.73759567Z level=info msg="starting to provision alerting"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=provisioning.alerting t=2025-10-10T09:47:57.737622021Z level=info msg="finished to provision alerting"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.state.manager t=2025-10-10T09:47:57.737706734Z level=info msg="Warming state cache for startup"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.multiorg.alertmanager t=2025-10-10T09:47:57.73788326Z level=info msg="Starting MultiOrg Alertmanager"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.state.manager t=2025-10-10T09:47:57.738022925Z level=info msg="State cache has been initialized" states=0 duration=316.221µs
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ngalert.scheduler t=2025-10-10T09:47:57.738062546Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ticker t=2025-10-10T09:47:57.738103997Z level=info msg=starting first_tick=2025-10-10T09:48:00Z
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=grafanaStorageLogger t=2025-10-10T09:47:57.738259663Z level=info msg="Storage starting"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=http.server t=2025-10-10T09:47:57.741103386Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=http.server t=2025-10-10T09:47:57.741488779Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=sqlstore.transactions t=2025-10-10T09:47:57.749511273Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=provisioning.dashboard t=2025-10-10T09:47:57.758478998Z level=info msg="starting to provision dashboards"
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=plugins.update.checker t=2025-10-10T09:47:57.830224186Z level=info msg="Update check succeeded" duration=91.704915ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=grafana.update.checker t=2025-10-10T09:47:57.832074398Z level=info msg="Update check succeeded" duration=90.469415ms
Oct 10 09:47:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=sqlstore.transactions t=2025-10-10T09:47:57.884109179Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 10 09:47:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=provisioning.dashboard t=2025-10-10T09:47:58.061013586Z level=info msg="finished to provision dashboards"
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 11.2 scrub starts
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 11.2 scrub ok
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 7.d scrub starts
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 7.d scrub ok
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 8.c scrub starts
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 8.c scrub ok
Oct 10 09:47:58 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mon[73551]: Deploying daemon haproxy.rgw.default.compute-2.mhdkdo on compute-2
Oct 10 09:47:58 compute-0 ceph-mon[73551]: pgmap v75: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 10 09:47:58 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 7.1 scrub starts
Oct 10 09:47:58 compute-0 ceph-mon[73551]: 7.1 scrub ok
Oct 10 09:47:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:58 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=grafana-apiserver t=2025-10-10T09:47:58.209232189Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct 10 09:47:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=grafana-apiserver t=2025-10-10T09:47:58.209823409Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 10 09:47:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:58 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8080016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:58 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Oct 10 09:47:58 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Oct 10 09:47:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:47:58 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 26 completed events
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event dd9f8c4c-cebe-46e2-a2b7-89c99295a6b8 (Global Recovery Event) in 5 seconds
Oct 10 09:47:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:47:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:47:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:47:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Oct 10 09:47:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.bbeizy on compute-2
Oct 10 09:47:58 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.bbeizy on compute-2
Oct 10 09:47:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:47:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:47:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:47:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 8.1 deep-scrub starts
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 8.1 deep-scrub ok
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 12.11 scrub starts
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 12.11 scrub ok
Oct 10 09:47:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 10 09:47:59 compute-0 ceph-mon[73551]: osdmap e69: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-0 ceph-mon[73551]: osdmap e70: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 7.7 scrub starts
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 7.7 scrub ok
Oct 10 09:47:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:59 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:59 compute-0 ceph-mon[73551]: Deploying daemon keepalived.rgw.default.compute-2.bbeizy on compute-2
Oct 10 09:47:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 10 09:47:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Oct 10 09:47:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 10 09:47:59 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct 10 09:47:59 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct 10 09:48:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:00 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 10 09:48:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 10 09:48:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 10 09:48:00 compute-0 ceph-mon[73551]: 9.0 scrub starts
Oct 10 09:48:00 compute-0 ceph-mon[73551]: 9.0 scrub ok
Oct 10 09:48:00 compute-0 ceph-mon[73551]: 7.14 scrub starts
Oct 10 09:48:00 compute-0 ceph-mon[73551]: 7.14 scrub ok
Oct 10 09:48:00 compute-0 ceph-mon[73551]: osdmap e71: 3 total, 3 up, 3 in
Oct 10 09:48:00 compute-0 ceph-mon[73551]: pgmap v79: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:00 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 10 09:48:00 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 72 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=72) [0] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 10 09:48:00 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 72 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=72) [0] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:00 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 72 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=72) [0] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:00 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 72 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=72) [0] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:00 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 10 09:48:00 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 10 09:48:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:00 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:48:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:00 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8080016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:48:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 10 09:48:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:48:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:48:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:48:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:48:00 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.igkrok on compute-0
Oct 10 09:48:00 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.igkrok on compute-0
Oct 10 09:48:00 compute-0 sudo[98978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:00 compute-0 sudo[98978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:00 compute-0 sudo[98978]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:00 compute-0 sudo[99003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:00 compute-0 sudo[99003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:01.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.417485203 +0000 UTC m=+0.060826841 container create 056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_driscoll, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 10 09:48:01 compute-0 systemd[1]: Started libpod-conmon-056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657.scope.
Oct 10 09:48:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.390044571 +0000 UTC m=+0.033386229 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 09:48:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 10 09:48:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 10 09:48:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 73 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 8.0 scrub starts
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 8.0 scrub ok
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 12.1d scrub starts
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 12.1d scrub ok
Oct 10 09:48:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 10 09:48:01 compute-0 ceph-mon[73551]: osdmap e72: 3 total, 3 up, 3 in
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 10.0 scrub starts
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 10.0 scrub ok
Oct 10 09:48:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:01 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:48:01 compute-0 ceph-mon[73551]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:48:01 compute-0 ceph-mon[73551]: Deploying daemon keepalived.rgw.default.compute-0.igkrok on compute-0
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.513864191 +0000 UTC m=+0.157205839 container init 056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_driscoll, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4)
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.521965899 +0000 UTC m=+0.165307507 container start 056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_driscoll, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.525501674 +0000 UTC m=+0.168843312 container attach 056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_driscoll, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Oct 10 09:48:01 compute-0 relaxed_driscoll[99085]: 0 0
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.530785648 +0000 UTC m=+0.174127276 container died 056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_driscoll, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, io.buildah.version=1.28.2)
Oct 10 09:48:01 compute-0 systemd[1]: libpod-056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657.scope: Deactivated successfully.
Oct 10 09:48:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19/219 objects misplaced (8.676%); 0 B/s, 2 objects/s recovering
Oct 10 09:48:01 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 10 09:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b75820b4acea2675267361a5bc05b2d95bf86aa7378eb76efc09b076321b23-merged.mount: Deactivated successfully.
Oct 10 09:48:01 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 10 09:48:01 compute-0 podman[99069]: 2025-10-10 09:48:01.576693038 +0000 UTC m=+0.220034646 container remove 056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_driscoll, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived)
Oct 10 09:48:01 compute-0 systemd[1]: libpod-conmon-056aca23723e20e4d97024ef235499d78536f9eeb10f1d1082fdb6c5ce318657.scope: Deactivated successfully.
Oct 10 09:48:01 compute-0 systemd[1]: Reloading.
Oct 10 09:48:01 compute-0 systemd-rc-local-generator[99134]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:48:01 compute-0 systemd-sysv-generator[99138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:48:01 compute-0 systemd[1]: Reloading.
Oct 10 09:48:02 compute-0 systemd-rc-local-generator[99175]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:48:02 compute-0 systemd-sysv-generator[99179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:02 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:02 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.igkrok for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:48:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 9.1 scrub starts
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 9.1 scrub ok
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 9.5 scrub starts
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 9.5 scrub ok
Oct 10 09:48:02 compute-0 ceph-mon[73551]: osdmap e73: 3 total, 3 up, 3 in
Oct 10 09:48:02 compute-0 ceph-mon[73551]: pgmap v82: 353 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19/219 objects misplaced (8.676%); 0 B/s, 2 objects/s recovering
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 8.7 scrub starts
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 8.7 scrub ok
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 10.8 scrub starts
Oct 10 09:48:02 compute-0 ceph-mon[73551]: 10.8 scrub ok
Oct 10 09:48:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 10 09:48:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:02 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:02 compute-0 podman[99233]: 2025-10-10 09:48:02.583508304 +0000 UTC m=+0.049808399 container create 4a8847cd6df92a4059a2228786bd054638f579613a4892a9dde8e84d67728ec4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., name=keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 10 09:48:02 compute-0 podman[99233]: 2025-10-10 09:48:02.565068768 +0000 UTC m=+0.031368893 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 09:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4f4a12f99b8f24ccca21b4468b0e898468564fc5e7c771345ba202ccc14121/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:02 compute-0 podman[99233]: 2025-10-10 09:48:02.672218321 +0000 UTC m=+0.138518506 container init 4a8847cd6df92a4059a2228786bd054638f579613a4892a9dde8e84d67728ec4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793)
Oct 10 09:48:02 compute-0 podman[99233]: 2025-10-10 09:48:02.681289739 +0000 UTC m=+0.147589874 container start 4a8847cd6df92a4059a2228786bd054638f579613a4892a9dde8e84d67728ec4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, version=2.2.4, vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64)
Oct 10 09:48:02 compute-0 bash[99233]: 4a8847cd6df92a4059a2228786bd054638f579613a4892a9dde8e84d67728ec4
Oct 10 09:48:02 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.igkrok for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Running on Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 (built for Linux 5.14.0)
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Starting VRRP child process, pid=4
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: Startup complete
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:48:02 2025: (VI_0) Entering BACKUP STATE
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: (VI_0) Entering BACKUP STATE (init)
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:02 2025: VRRP_Script(check_backend) succeeded
Oct 10 09:48:02 compute-0 sudo[99003]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 10 09:48:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:02 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:02 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev 57b94cb1-bb87-4a01-89e1-75a3fc43a869 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 10 09:48:02 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 57b94cb1-bb87-4a01-89e1-75a3fc43a869 (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Oct 10 09:48:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 10 09:48:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:02 compute-0 ceph-mgr[73845]: [progress INFO root] update: starting ev fa9c87c6-70f5-43fb-bbfb-cd51e5b2d917 (Updating prometheus deployment (+1 -> 1))
Oct 10 09:48:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:02.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Oct 10 09:48:03 compute-0 sudo[99256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:03 compute-0 sudo[99256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:03 compute-0 sudo[99256]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:03 compute-0 sudo[99281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:03 compute-0 sudo[99281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:48:03 2025: (VI_0) Entering MASTER STATE
Oct 10 09:48:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 10 09:48:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.5( v 74'1104 (0'0,74'1104] local-lis/les=0/0 n=6 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=67'1101 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.5( v 74'1104 (0'0,74'1104] local-lis/les=0/0 n=6 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=67'1101 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:03 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 75 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19/219 objects misplaced (8.676%); 0 B/s, 1 objects/s recovering
Oct 10 09:48:03 compute-0 ceph-mon[73551]: 9.18 scrub starts
Oct 10 09:48:03 compute-0 ceph-mon[73551]: 9.18 scrub ok
Oct 10 09:48:03 compute-0 ceph-mon[73551]: 10.10 scrub starts
Oct 10 09:48:03 compute-0 ceph-mon[73551]: 10.10 scrub ok
Oct 10 09:48:03 compute-0 ceph-mon[73551]: osdmap e74: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-0 ceph-mon[73551]: osdmap e75: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 28 completed events
Oct 10 09:48:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:48:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-0 ceph-mgr[73845]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Oct 10 09:48:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:04 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:48:04 2025: (VI_0) received an invalid passwd!
Oct 10 09:48:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:04 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Oct 10 09:48:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 10 09:48:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 10 09:48:04 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 10 09:48:04 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 76 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=5 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:04 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 76 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=6 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:04 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 76 pg[10.5( v 74'1104 (0'0,74'1104] local-lis/les=75/76 n=6 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=74'1104 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:04 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 76 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=5 ec=56/45 lis/c=73/65 les/c/f=74/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:04 compute-0 ceph-mon[73551]: 11.19 scrub starts
Oct 10 09:48:04 compute-0 ceph-mon[73551]: 11.19 scrub ok
Oct 10 09:48:04 compute-0 ceph-mon[73551]: Deploying daemon prometheus.compute-0 on compute-0
Oct 10 09:48:04 compute-0 ceph-mon[73551]: 10.18 scrub starts
Oct 10 09:48:04 compute-0 ceph-mon[73551]: 10.18 scrub ok
Oct 10 09:48:04 compute-0 ceph-mon[73551]: pgmap v85: 353 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19/219 objects misplaced (8.676%); 0 B/s, 1 objects/s recovering
Oct 10 09:48:04 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:04 compute-0 ceph-mon[73551]: osdmap e76: 3 total, 3 up, 3 in
Oct 10 09:48:04 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 10 09:48:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:04 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:04 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 10 09:48:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:04 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:04.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj[97413]: Fri Oct 10 09:48:05 2025: (VI_0) received an invalid passwd!
Oct 10 09:48:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:05 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Oct 10 09:48:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 262 B/s, 10 objects/s recovering
Oct 10 09:48:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Oct 10 09:48:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 10 09:48:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 10 09:48:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 10 09:48:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 10 09:48:05 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 10 09:48:05 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Oct 10 09:48:05 compute-0 ceph-mon[73551]: 8.5 scrub starts
Oct 10 09:48:05 compute-0 ceph-mon[73551]: 8.5 scrub ok
Oct 10 09:48:05 compute-0 ceph-mon[73551]: 10.15 scrub starts
Oct 10 09:48:05 compute-0 ceph-mon[73551]: 10.15 scrub ok
Oct 10 09:48:05 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 10 09:48:05 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Oct 10 09:48:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:06 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.488887787s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 active pruub 202.341033936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.488837242s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 202.341033936s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.490736008s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 active pruub 202.343551636s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.490676880s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 202.343551636s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.490692139s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 active pruub 202.343841553s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.490645409s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 202.343841553s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.490342140s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 active pruub 202.343872070s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 77 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=9.490221977s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 202.343872070s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-rgw-default-compute-0-igkrok[99248]: Fri Oct 10 09:48:06 2025: (VI_0) Entering MASTER STATE
Oct 10 09:48:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:06 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 10 09:48:06 compute-0 ceph-mon[73551]: 9.9 scrub starts
Oct 10 09:48:06 compute-0 ceph-mon[73551]: 9.9 scrub ok
Oct 10 09:48:06 compute-0 ceph-mon[73551]: 12.f deep-scrub starts
Oct 10 09:48:06 compute-0 ceph-mon[73551]: 12.f deep-scrub ok
Oct 10 09:48:06 compute-0 ceph-mon[73551]: pgmap v87: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 262 B/s, 10 objects/s recovering
Oct 10 09:48:06 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 10 09:48:06 compute-0 ceph-mon[73551]: osdmap e77: 3 total, 3 up, 3 in
Oct 10 09:48:06 compute-0 ceph-mon[73551]: 10.d deep-scrub starts
Oct 10 09:48:06 compute-0 ceph-mon[73551]: 10.d deep-scrub ok
Oct 10 09:48:06 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 10 09:48:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 10 09:48:06 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 78 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 10 09:48:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:06 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:06 compute-0 podman[99346]: 2025-10-10 09:48:06.895900394 +0000 UTC m=+3.224128346 volume create 972041af3899eb4c597d1fbed5e142ecc1d61e43090e0e3255352a56461fdc94
Oct 10 09:48:06 compute-0 podman[99346]: 2025-10-10 09:48:06.875770302 +0000 UTC m=+3.203998294 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 10 09:48:06 compute-0 podman[99346]: 2025-10-10 09:48:06.908258641 +0000 UTC m=+3.236486603 container create 6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f (image=quay.io/prometheus/prometheus:v2.51.0, name=bold_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:06 compute-0 systemd[1]: Started libpod-conmon-6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f.scope.
Oct 10 09:48:06 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5cfdfad14870814e537302fce4fb64849fab597113eefc82d8c71d79f46cce4/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:07 compute-0 podman[99346]: 2025-10-10 09:48:07.006582063 +0000 UTC m=+3.334810045 container init 6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f (image=quay.io/prometheus/prometheus:v2.51.0, name=bold_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 podman[99346]: 2025-10-10 09:48:07.019440066 +0000 UTC m=+3.347668048 container start 6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f (image=quay.io/prometheus/prometheus:v2.51.0, name=bold_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 podman[99346]: 2025-10-10 09:48:07.024794652 +0000 UTC m=+3.353022604 container attach 6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f (image=quay.io/prometheus/prometheus:v2.51.0, name=bold_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 bold_franklin[99601]: 65534 65534
Oct 10 09:48:07 compute-0 systemd[1]: libpod-6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f.scope: Deactivated successfully.
Oct 10 09:48:07 compute-0 conmon[99601]: conmon 6bfbef5303694a7f571b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f.scope/container/memory.events
Oct 10 09:48:07 compute-0 podman[99346]: 2025-10-10 09:48:07.027951166 +0000 UTC m=+3.356179108 container died 6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f (image=quay.io/prometheus/prometheus:v2.51.0, name=bold_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5cfdfad14870814e537302fce4fb64849fab597113eefc82d8c71d79f46cce4-merged.mount: Deactivated successfully.
Oct 10 09:48:07 compute-0 podman[99346]: 2025-10-10 09:48:07.089938084 +0000 UTC m=+3.418166026 container remove 6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f (image=quay.io/prometheus/prometheus:v2.51.0, name=bold_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 podman[99346]: 2025-10-10 09:48:07.095687323 +0000 UTC m=+3.423915285 volume remove 972041af3899eb4c597d1fbed5e142ecc1d61e43090e0e3255352a56461fdc94
Oct 10 09:48:07 compute-0 systemd[1]: libpod-conmon-6bfbef5303694a7f571b46bb1adaa6dbbe0416ba78d0741a3bea44e579415b8f.scope: Deactivated successfully.
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.176229872 +0000 UTC m=+0.046453469 volume create 0fc8468fd3de9da68faf09edab583cd012db88ad2d1f8d49de24d1ac7f963d92
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.182662044 +0000 UTC m=+0.052885641 container create f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 systemd[1]: Started libpod-conmon-f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d.scope.
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.156201053 +0000 UTC m=+0.026424680 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 10 09:48:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495a0b2f71f7352000372aff4f2c0e3eea8dc0d2679c055cfada4dc445dc8878/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.274605627 +0000 UTC m=+0.144829314 container init f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.285224886 +0000 UTC m=+0.155448523 container start f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 exciting_hawking[99635]: 65534 65534
Oct 10 09:48:07 compute-0 systemd[1]: libpod-f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d.scope: Deactivated successfully.
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.293004992 +0000 UTC m=+0.163228709 container attach f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.293449577 +0000 UTC m=+0.163673264 container died f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-495a0b2f71f7352000372aff4f2c0e3eea8dc0d2679c055cfada4dc445dc8878-merged.mount: Deactivated successfully.
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.356165169 +0000 UTC m=+0.226388806 container remove f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_hawking, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:07 compute-0 podman[99618]: 2025-10-10 09:48:07.361591638 +0000 UTC m=+0.231815275 volume remove 0fc8468fd3de9da68faf09edab583cd012db88ad2d1f8d49de24d1ac7f963d92
Oct 10 09:48:07 compute-0 systemd[1]: libpod-conmon-f199b5a50916fffe01e04617c631aa97456cff7624b9114053895b51f86c682d.scope: Deactivated successfully.
Oct 10 09:48:07 compute-0 systemd[1]: Reloading.
Oct 10 09:48:07 compute-0 systemd-sysv-generator[99685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:48:07 compute-0 systemd-rc-local-generator[99682]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:48:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 258 B/s, 10 objects/s recovering
Oct 10 09:48:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Oct 10 09:48:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 10 09:48:07 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 10 09:48:07 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 10 09:48:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 10 09:48:07 compute-0 ceph-mon[73551]: 7.5 scrub starts
Oct 10 09:48:07 compute-0 ceph-mon[73551]: 7.5 scrub ok
Oct 10 09:48:07 compute-0 ceph-mon[73551]: 7.0 scrub starts
Oct 10 09:48:07 compute-0 ceph-mon[73551]: 7.0 scrub ok
Oct 10 09:48:07 compute-0 ceph-mon[73551]: 10.12 scrub starts
Oct 10 09:48:07 compute-0 ceph-mon[73551]: osdmap e78: 3 total, 3 up, 3 in
Oct 10 09:48:07 compute-0 ceph-mon[73551]: 10.12 scrub ok
Oct 10 09:48:07 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 10 09:48:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 10 09:48:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 10 09:48:07 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 10 09:48:07 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 79 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:07 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 79 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:07 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 79 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:07 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 79 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:07 compute-0 systemd[1]: Reloading.
Oct 10 09:48:07 compute-0 sshd-session[99690]: Accepted publickey for zuul from 192.168.122.30 port 51662 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:48:07 compute-0 systemd-rc-local-generator[99719]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:48:07 compute-0 systemd-sysv-generator[99722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:48:08 compute-0 systemd-logind[806]: New session 38 of user zuul.
Oct 10 09:48:08 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 10 09:48:08 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:48:08 compute-0 sshd-session[99690]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:08 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204835892s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 51'1091 active pruub 210.236068726s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204742432s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 210.236068726s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204613686s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 51'1091 active pruub 210.236160278s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204546928s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 210.236160278s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204686165s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 51'1091 active pruub 210.236373901s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204529762s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 210.236373901s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204237938s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 51'1091 active pruub 210.236358643s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 80 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=78/79 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.204055786s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 210.236358643s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-0 podman[99845]: 2025-10-10 09:48:08.476312651 +0000 UTC m=+0.082224864 container create fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06defca5860ccb934f9399db98cebecff37cd03247212561a768d8d15e47ebf/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06defca5860ccb934f9399db98cebecff37cd03247212561a768d8d15e47ebf/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:08 compute-0 podman[99845]: 2025-10-10 09:48:08.437309708 +0000 UTC m=+0.043222011 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 10 09:48:08 compute-0 podman[99845]: 2025-10-10 09:48:08.537866035 +0000 UTC m=+0.143778278 container init fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:08 compute-0 podman[99845]: 2025-10-10 09:48:08.542635812 +0000 UTC m=+0.148548025 container start fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:08 compute-0 bash[99845]: fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:08 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:08 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:48:08 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Oct 10 09:48:08 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.588Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.588Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.588Z caller=main.go:623 level=info host_details="(Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 x86_64 compute-0 (none))"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.588Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.588Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.591Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.592Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.595Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.595Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.600Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.600Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.43µs
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.600Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.601Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.601Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=328.001µs wal_replay_duration=324.72µs wbl_replay_duration=310ns total_replay_duration=690.132µs
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.603Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.603Z caller=main.go:1153 level=info msg="TSDB started"
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.603Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Oct 10 09:48:08 compute-0 sudo[99281]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.640Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=36.718177ms db_storage=2.1µs remote_storage=3.231µs web_handler=1.14µs query_engine=2.37µs scrape=7.386922ms scrape_sd=305.09µs notify=33.121µs notify_sd=21.58µs rules=27.96556ms tracing=24.441µs
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.640Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0[99898]: ts=2025-10-10T09:48:08.640Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Oct 10 09:48:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:08 compute-0 ceph-mon[73551]: 12.2 deep-scrub starts
Oct 10 09:48:08 compute-0 ceph-mon[73551]: 12.2 deep-scrub ok
Oct 10 09:48:08 compute-0 ceph-mon[73551]: 12.d scrub starts
Oct 10 09:48:08 compute-0 ceph-mon[73551]: 12.d scrub ok
Oct 10 09:48:08 compute-0 ceph-mon[73551]: pgmap v90: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 258 B/s, 10 objects/s recovering
Oct 10 09:48:08 compute-0 ceph-mon[73551]: 9.4 scrub starts
Oct 10 09:48:08 compute-0 ceph-mon[73551]: 9.4 scrub ok
Oct 10 09:48:08 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 10 09:48:08 compute-0 ceph-mon[73551]: osdmap e79: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-0 ceph-mon[73551]: osdmap e80: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 10 09:48:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:08 compute-0 ceph-mgr[73845]: [progress INFO root] complete: finished ev fa9c87c6-70f5-43fb-bbfb-cd51e5b2d917 (Updating prometheus deployment (+1 -> 1))
Oct 10 09:48:08 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event fa9c87c6-70f5-43fb-bbfb-cd51e5b2d917 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Oct 10 09:48:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 10 09:48:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:08 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:08 compute-0 ceph-mgr[73845]: [progress INFO root] Writing back 29 completed events
Oct 10 09:48:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 10 09:48:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:08 compute-0 ceph-mgr[73845]: [progress INFO root] Completed event 71715aa5-cc02-46ab-a050-c23942ac4516 (Global Recovery Event) in 5 seconds
Oct 10 09:48:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:08.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:09 compute-0 python3.9[99966]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:09.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 10 09:48:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 10 09:48:09 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 10 09:48:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 1 active+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/214 objects misplaced (7.009%)
Oct 10 09:48:09 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 10 09:48:09 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 10 09:48:09 compute-0 ceph-mon[73551]: 9.7 deep-scrub starts
Oct 10 09:48:09 compute-0 ceph-mon[73551]: 9.7 deep-scrub ok
Oct 10 09:48:09 compute-0 ceph-mon[73551]: 12.5 scrub starts
Oct 10 09:48:09 compute-0 ceph-mon[73551]: 12.5 scrub ok
Oct 10 09:48:09 compute-0 ceph-mon[73551]: 11.18 deep-scrub starts
Oct 10 09:48:09 compute-0 ceph-mon[73551]: 11.18 deep-scrub ok
Oct 10 09:48:09 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 10 09:48:09 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-0 ceph-mon[73551]: osdmap e81: 3 total, 3 up, 3 in
Oct 10 09:48:09 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 10 09:48:09 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.xkdepb(active, since 96s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:09 compute-0 sshd-session[91343]: Connection closed by 192.168.122.100 port 49384
Oct 10 09:48:09 compute-0 sshd-session[91317]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:48:09 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 10 09:48:09 compute-0 systemd[1]: session-36.scope: Consumed 53.093s CPU time.
Oct 10 09:48:09 compute-0 systemd-logind[806]: Session 36 logged out. Waiting for processes to exit.
Oct 10 09:48:09 compute-0 systemd-logind[806]: Removed session 36.
Oct 10 09:48:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setuser ceph since I am not root
Oct 10 09:48:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ignoring --setgroup ceph since I am not root
Oct 10 09:48:09 compute-0 ceph-mgr[73845]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:48:09 compute-0 ceph-mgr[73845]: pidfile_write: ignore empty --pid-file
Oct 10 09:48:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'alerts'
Oct 10 09:48:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:09.996+0000 7f50292d3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:48:09 compute-0 ceph-mgr[73845]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:48:09 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'balancer'
Oct 10 09:48:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:10.081+0000 7f50292d3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-0 ceph-mgr[73845]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'cephadm'
Oct 10 09:48:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:10 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 10 09:48:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 10 09:48:10 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 10 09:48:10 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Oct 10 09:48:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:10 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:10 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Oct 10 09:48:10 compute-0 sudo[100210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eijnamsjwthzufqbvmyxjqgcbuwszegl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089690.0878801-56-16911655398667/AnsiballZ_command.py'
Oct 10 09:48:10 compute-0 sudo[100210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:10 compute-0 ceph-mon[73551]: 8.b scrub starts
Oct 10 09:48:10 compute-0 ceph-mon[73551]: 8.b scrub ok
Oct 10 09:48:10 compute-0 ceph-mon[73551]: 12.0 scrub starts
Oct 10 09:48:10 compute-0 ceph-mon[73551]: 12.0 scrub ok
Oct 10 09:48:10 compute-0 ceph-mon[73551]: 9.1a scrub starts
Oct 10 09:48:10 compute-0 ceph-mon[73551]: 9.1a scrub ok
Oct 10 09:48:10 compute-0 ceph-mon[73551]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 10 09:48:10 compute-0 ceph-mon[73551]: mgrmap e28: compute-0.xkdepb(active, since 96s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:10 compute-0 ceph-mon[73551]: osdmap e82: 3 total, 3 up, 3 in
Oct 10 09:48:10 compute-0 python3.9[100212]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:48:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:10 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'crash'
Oct 10 09:48:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:10.984+0000 7f50292d3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-0 ceph-mgr[73845]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'dashboard'
Oct 10 09:48:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:11.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 10 09:48:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 10 09:48:11 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 10 09:48:11 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:48:11 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 10 09:48:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:11.628+0000 7f50292d3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:48:11 compute-0 ceph-mon[73551]: 12.1e scrub starts
Oct 10 09:48:11 compute-0 ceph-mon[73551]: 12.1e scrub ok
Oct 10 09:48:11 compute-0 ceph-mon[73551]: 12.1f scrub starts
Oct 10 09:48:11 compute-0 ceph-mon[73551]: 12.1f scrub ok
Oct 10 09:48:11 compute-0 ceph-mon[73551]: 9.1b deep-scrub starts
Oct 10 09:48:11 compute-0 ceph-mon[73551]: 9.1b deep-scrub ok
Oct 10 09:48:11 compute-0 ceph-mon[73551]: osdmap e83: 3 total, 3 up, 3 in
Oct 10 09:48:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:48:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:48:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]:   from numpy import show_config as show_numpy_config
Oct 10 09:48:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:11.805+0000 7f50292d3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'influx'
Oct 10 09:48:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:11.873+0000 7f50292d3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'insights'
Oct 10 09:48:11 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'iostat'
Oct 10 09:48:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:12.007+0000 7f50292d3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:48:12 compute-0 ceph-mgr[73845]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:48:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:48:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:12 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'localpool'
Oct 10 09:48:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:48:12 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 10 09:48:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:12 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:12 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 10 09:48:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'mirroring'
Oct 10 09:48:12 compute-0 ceph-mon[73551]: 11.8 scrub starts
Oct 10 09:48:12 compute-0 ceph-mon[73551]: 11.8 scrub ok
Oct 10 09:48:12 compute-0 ceph-mon[73551]: 7.17 scrub starts
Oct 10 09:48:12 compute-0 ceph-mon[73551]: 8.1a scrub starts
Oct 10 09:48:12 compute-0 ceph-mon[73551]: 7.17 scrub ok
Oct 10 09:48:12 compute-0 ceph-mon[73551]: 8.1a scrub ok
Oct 10 09:48:12 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'nfs'
Oct 10 09:48:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:12 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.014+0000 7f50292d3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.234+0000 7f50292d3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:48:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:13.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.315+0000 7f50292d3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'osd_support'
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.381+0000 7f50292d3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:48:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.467+0000 7f50292d3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'progress'
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.539+0000 7f50292d3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'prometheus'
Oct 10 09:48:13 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 10 09:48:13 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 10 09:48:13 compute-0 ceph-mon[73551]: 8.a scrub starts
Oct 10 09:48:13 compute-0 ceph-mon[73551]: 8.a scrub ok
Oct 10 09:48:13 compute-0 ceph-mon[73551]: 12.1b scrub starts
Oct 10 09:48:13 compute-0 ceph-mon[73551]: 12.1b scrub ok
Oct 10 09:48:13 compute-0 ceph-mon[73551]: 11.6 scrub starts
Oct 10 09:48:13 compute-0 ceph-mon[73551]: 11.6 scrub ok
Oct 10 09:48:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:13.901+0000 7f50292d3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:48:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:14.005+0000 7f50292d3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-0 ceph-mgr[73845]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'restful'
Oct 10 09:48:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:14 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:14 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rgw'
Oct 10 09:48:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:14.462+0000 7f50292d3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-0 ceph-mgr[73845]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'rook'
Oct 10 09:48:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:14 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:14 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 10 09:48:14 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 10 09:48:14 compute-0 ceph-mon[73551]: 9.16 scrub starts
Oct 10 09:48:14 compute-0 ceph-mon[73551]: 9.16 scrub ok
Oct 10 09:48:14 compute-0 ceph-mon[73551]: 12.16 deep-scrub starts
Oct 10 09:48:14 compute-0 ceph-mon[73551]: 9.19 scrub starts
Oct 10 09:48:14 compute-0 ceph-mon[73551]: 12.16 deep-scrub ok
Oct 10 09:48:14 compute-0 ceph-mon[73551]: 9.19 scrub ok
Oct 10 09:48:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:14 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:14.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.030+0000 7f50292d3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'selftest'
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.100+0000 7f50292d3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.183+0000 7f50292d3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'stats'
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'status'
Oct 10 09:48:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:15.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.339+0000 7f50292d3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telegraf'
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.408+0000 7f50292d3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'telemetry'
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.571+0000 7f50292d3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:48:15 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 10 09:48:15 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 10 09:48:15 compute-0 ceph-mon[73551]: 9.b deep-scrub starts
Oct 10 09:48:15 compute-0 ceph-mon[73551]: 9.b deep-scrub ok
Oct 10 09:48:15 compute-0 ceph-mon[73551]: 12.14 scrub starts
Oct 10 09:48:15 compute-0 ceph-mon[73551]: 9.1e scrub starts
Oct 10 09:48:15 compute-0 ceph-mon[73551]: 12.14 scrub ok
Oct 10 09:48:15 compute-0 ceph-mon[73551]: 9.1e scrub ok
Oct 10 09:48:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:15.803+0000 7f50292d3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'volumes'
Oct 10 09:48:15 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:48:15 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rfugxc started
Oct 10 09:48:15 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:48:15 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gkrssp started
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.074+0000 7f50292d3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr[py] Loading python module 'zabbix'
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.141+0000 7f50292d3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Active manager daemon compute-0.xkdepb restarted
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xkdepb
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: ms_deliver_dispatch: unhandled message 0x55aa72ead860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr handle_mgr_map Activating!
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.xkdepb(active, starting, since 0.0333518s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr handle_mgr_map I am now activating
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e10 all = 0
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e10 all = 0
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e10 all = 0
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).mds e10 all = 1
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: balancer
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : Manager daemon compute-0.xkdepb is now available
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Starting
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:48:16
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:16 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: cephadm
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: crash
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: dashboard
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO sso] Loading SSO DB version=1
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: devicehealth
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: iostat
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: nfs
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Starting
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: orchestrator
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: pg_autoscaler
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: progress
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [progress INFO root] Loading...
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f4fade990a0>, <progress.module.GhostEvent object at 0x7f4fade990d0>, <progress.module.GhostEvent object at 0x7f4fade99100>, <progress.module.GhostEvent object at 0x7f4fade99130>, <progress.module.GhostEvent object at 0x7f4fade99160>, <progress.module.GhostEvent object at 0x7f4fade99190>, <progress.module.GhostEvent object at 0x7f4fade991c0>, <progress.module.GhostEvent object at 0x7f4fade991f0>, <progress.module.GhostEvent object at 0x7f4fade99220>, <progress.module.GhostEvent object at 0x7f4fade99250>, <progress.module.GhostEvent object at 0x7f4fade99280>, <progress.module.GhostEvent object at 0x7f4fade992b0>, <progress.module.GhostEvent object at 0x7f4fade992e0>, <progress.module.GhostEvent object at 0x7f4fade99310>, <progress.module.GhostEvent object at 0x7f4fade99340>, <progress.module.GhostEvent object at 0x7f4fade99370>, <progress.module.GhostEvent object at 0x7f4fade993a0>, <progress.module.GhostEvent object at 0x7f4fade993d0>, <progress.module.GhostEvent object at 0x7f4fade99400>, <progress.module.GhostEvent object at 0x7f4fade99430>, <progress.module.GhostEvent object at 0x7f4fade99460>, <progress.module.GhostEvent object at 0x7f4fade99490>, <progress.module.GhostEvent object at 0x7f4fade994c0>, <progress.module.GhostEvent object at 0x7f4fade994f0>, <progress.module.GhostEvent object at 0x7f4fade99520>, <progress.module.GhostEvent object at 0x7f4fade99550>, <progress.module.GhostEvent object at 0x7f4fade99580>, <progress.module.GhostEvent object at 0x7f4fade995b0>, <progress.module.GhostEvent object at 0x7f4fade995e0>] historic events
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: prometheus
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO root] Cache enabled
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO root] starting metric collection thread
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO root] Starting engine...
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:16] ENGINE Bus STARTING
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: CherryPy Checker:
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: The Application mounted at '' has an empty config.
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:16] ENGINE Bus STARTING
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] recovery thread starting
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] starting setup
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: rbd_support
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: restful
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: status
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: telemetry
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [restful WARNING root] server not running: no certificate configured
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] PerfHandler: starting
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: mgr load Constructed class from module: volumes
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.380+0000 7f4f8f9db640 -1 client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.387+0000 7f4f96368640 -1 client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.387+0000 7f4f96368640 -1 client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.387+0000 7f4f96368640 -1 client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.387+0000 7f4f96368640 -1 client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T09:48:16.387+0000 7f4f96368640 -1 client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: client.0 error registering admin socket command: (17) File exists
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TaskHandler: starting
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"} v 0)
Oct 10 09:48:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] setup complete
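
The rbd_support module is fully initialized at this point: each of its four handlers (MirrorSnapshotScheduleHandler, PerfHandler, TaskHandler, TrashPurgeScheduleHandler) logs "<Name>: starting" before the final "setup complete". A minimal sketch that checks a saved journal dump for all four; the dump file name is hypothetical:

    import re

    # Handler names exactly as they appear in the rbd_support lines above.
    EXPECTED = {"MirrorSnapshotScheduleHandler", "PerfHandler",
                "TaskHandler", "TrashPurgeScheduleHandler"}

    started = set()
    with open("compute-0-journal.txt") as f:   # hypothetical journal dump
        for line in f:
            m = re.search(r"\[rbd_support INFO root\] (\w+): starting", line)
            if m:
                started.add(m.group(1))

    missing = EXPECTED - started
    print("all handlers started" if not missing else f"missing: {sorted(missing)}")
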
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:16] ENGINE Serving on http://:::9283
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:16] ENGINE Serving on http://:::9283
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:16] ENGINE Bus STARTED
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:16] ENGINE Bus STARTED
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [prometheus INFO root] Engine started.
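
"ENGINE Serving on http://:::9283" means the mgr prometheus module is now listening on port 9283 on all addresses; the Prometheus scrape of /metrics that succeeds a second later appears further down. A stdlib-only sketch that fetches the exporter and counts metric families, assuming the mgr host 192.168.122.100 (taken from this log) is reachable from where the sketch runs:

    from urllib.request import urlopen

    # Host and port come from this log; reachability is an assumption.
    body = urlopen("http://192.168.122.100:9283/metrics", timeout=5).read().decode()
    families = {l.split()[2] for l in body.splitlines() if l.startswith("# TYPE ")}
    print(f"{len(body)} bytes, {len(families)} metric families")
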
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:16 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 10 09:48:16 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 10 09:48:16 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
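
Each "Initializing controller: <Class> -> <path>" line above binds one dashboard controller class to a REST route under /api or /ui-api. A small sketch that rebuilds the route table from a saved journal dump (the file name is hypothetical):

    import re

    route_re = re.compile(r"Initializing controller: (\S+) -> (\S+)")

    routes = {}
    with open("compute-0-journal.txt") as f:   # hypothetical journal dump
        for line in f:
            m = route_re.search(line)
            if m:
                routes[m.group(1)] = m.group(2)

    for name, path in sorted(routes.items(), key=lambda kv: kv[1]):
        print(f"{path:<50} {name}")
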
Oct 10 09:48:16 compute-0 sshd-session[100398]: Accepted publickey for ceph-admin from 192.168.122.100 port 50036 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:48:16 compute-0 systemd-logind[806]: New session 39 of user ceph-admin.
Oct 10 09:48:16 compute-0 ceph-mon[73551]: 11.13 scrub starts
Oct 10 09:48:16 compute-0 ceph-mon[73551]: 11.13 scrub ok
Oct 10 09:48:16 compute-0 ceph-mon[73551]: 9.1f scrub starts
Oct 10 09:48:16 compute-0 ceph-mon[73551]: 9.1f scrub ok
Oct 10 09:48:16 compute-0 ceph-mon[73551]: 12.1 scrub starts
Oct 10 09:48:16 compute-0 ceph-mon[73551]: 12.1 scrub ok
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Activating manager daemon compute-0.xkdepb
Oct 10 09:48:16 compute-0 ceph-mon[73551]: osdmap e84: 3 total, 3 up, 3 in
Oct 10 09:48:16 compute-0 ceph-mon[73551]: mgrmap e29: compute-0.xkdepb(active, starting, since 0.0333518s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:48:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
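
The cmd=[{...}] payloads in these audit lines are the JSON mon commands the newly active mgr dispatches while rebuilding its metadata caches. The same wire format is available to any client through the python-rados mon_command() call; a minimal sketch, assuming /etc/ceph/ceph.conf and an admin keyring are present where it runs:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same JSON shape as the audited "osd metadata" command above.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd metadata", "id": 0, "format": "json"}), b"")
        if ret == 0:
            print(json.loads(out)["hostname"])
        else:
            print("mon_command failed:", ret, errs)
    finally:
        cluster.shutdown()
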
Oct 10 09:48:16 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Oct 10 09:48:16 compute-0 sshd-session[100398]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:48:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:16 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:16 compute-0 ceph-mgr[73845]: [dashboard INFO dashboard.module] Engine started.
Oct 10 09:48:16 compute-0 sudo[100416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:16 compute-0 sudo[100416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:16 compute-0 sudo[100416]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
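
The "beast:" lines are radosgw's HTTP access log: client address, user ("anonymous" here, plausibly load-balancer health probes given the haproxy and keepalived containers on this host), timestamp, request line, status, response bytes, and latency. A parsing sketch, checked against the line above:

    import re

    beast_re = re.compile(
        r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] "([^"]*)" (\d+) (\d+) .* '
        r'latency=([0-9.]+)s')

    sample = ('beast: 0x7f96beba75d0: 192.168.122.102 - anonymous '
              '[10/Oct/2025:09:48:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    ip, user, ts, request, status, nbytes, latency = beast_re.search(sample).groups()
    print(ip, user, request, status, f"{float(latency) * 1000:.3f} ms")
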
Oct 10 09:48:16 compute-0 sudo[100444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:48:16 compute-0 sudo[100444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:17 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.xkdepb(active, since 1.06227s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:17.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:17] "GET /metrics HTTP/1.1" 200 46569 "" "Prometheus/2.51.0"
Oct 10 09:48:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:17] "GET /metrics HTTP/1.1" 200 46569 "" "Prometheus/2.51.0"
Oct 10 09:48:17 compute-0 podman[100546]: 2025-10-10 09:48:17.518804427 +0000 UTC m=+0.063870222 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:48:17 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 10 09:48:17 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 10 09:48:17 compute-0 podman[100546]: 2025-10-10 09:48:17.615891619 +0000 UTC m=+0.160957424 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 09:48:17 compute-0 ceph-mon[73551]: 11.a scrub starts
Oct 10 09:48:17 compute-0 ceph-mon[73551]: 11.a scrub ok
Oct 10 09:48:17 compute-0 ceph-mon[73551]: 8.1e scrub starts
Oct 10 09:48:17 compute-0 ceph-mon[73551]: 8.1e scrub ok
Oct 10 09:48:17 compute-0 ceph-mon[73551]: 9.15 scrub starts
Oct 10 09:48:17 compute-0 ceph-mon[73551]: 9.15 scrub ok
Oct 10 09:48:17 compute-0 ceph-mon[73551]: mgrmap e30: compute-0.xkdepb(active, since 1.06227s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:17 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:48:17] ENGINE Bus STARTING
Oct 10 09:48:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:48:17] ENGINE Bus STARTING
Oct 10 09:48:17 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:48:17] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:48:17 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:48:17] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:48:18 compute-0 sudo[100210]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:18 compute-0 podman[100673]: 2025-10-10 09:48:18.099647976 +0000 UTC m=+0.075782233 container exec 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:48:18] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:48:18] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:48:18] ENGINE Bus STARTED
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:48:18] ENGINE Bus STARTED
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: [cephadm INFO cherrypy.error] [10/Oct/2025:09:48:18] ENGINE Client ('192.168.122.100', 53560) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : [10/Oct/2025:09:48:18] ENGINE Client ('192.168.122.100', 53560) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
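
A client that opens a TCP connection to the cephadm HTTPS endpoint on 7150 and closes it without completing the handshake would produce exactly this "TLS/SSL connection has been closed (EOF)" entry, which is the usual signature of a liveness probe. A sketch that reproduces the pattern; host and port are taken from this log, and that the probe here is benign is an assumption:

    import socket

    # Connect to the TLS port and close without handshaking; the server
    # side logs an EOF during handshake, as seen above.
    s = socket.create_connection(("192.168.122.100", 7150), timeout=5)
    s.close()
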
Oct 10 09:48:18 compute-0 podman[100673]: 2025-10-10 09:48:18.135894877 +0000 UTC m=+0.112029174 container exec_died 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
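
Each pgmap line is a one-line digest of placement-group and capacity state: PG count, per-state breakdown, logical data size, raw bytes used, and available/total raw capacity. A sketch that splits the digest into fields, checked against the line above:

    import re

    pgmap_re = re.compile(
        r"pgmap v(\d+): (\d+) pgs: (.+); (\S+ \S+) data, "
        r"(\S+ \S+) used, (\S+ \S+) / (\S+ \S+) avail")

    sample = ("pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, "
              "125 MiB used, 60 GiB / 60 GiB avail")
    v, pgs, states, data, used, avail, total = pgmap_re.search(sample).groups()
    print(f"pgmap v{v}: {pgs} PGs ({states}); {used} used, {avail} of {total} free")
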
Oct 10 09:48:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct 10 09:48:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 10 09:48:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:18 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:18 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 09:48:18 compute-0 sshd-session[99737]: Connection closed by 192.168.122.30 port 51662
Oct 10 09:48:18 compute-0 sshd-session[99690]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:48:18 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 10 09:48:18 compute-0 systemd[1]: session-38.scope: Consumed 8.614s CPU time.
Oct 10 09:48:18 compute-0 systemd-logind[806]: Session 38 logged out. Waiting for processes to exit.
Oct 10 09:48:18 compute-0 systemd-logind[806]: Removed session 38.
Oct 10 09:48:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:18 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 10 09:48:18 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 10 09:48:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:18 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:18 compute-0 podman[100808]: 2025-10-10 09:48:18.610447922 +0000 UTC m=+0.141578077 container exec 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:48:18 compute-0 podman[100808]: 2025-10-10 09:48:18.626945164 +0000 UTC m=+0.158075339 container exec_died 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 10 09:48:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 10 09:48:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 10 09:48:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 10 09:48:18 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 85 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85) [0] r=0 lpr=85 pi=[56,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:18 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 85 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85) [0] r=0 lpr=85 pi=[56,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 9.3 scrub starts
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 9.3 scrub ok
Oct 10 09:48:18 compute-0 ceph-mon[73551]: pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 9.1c scrub starts
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 9.1c scrub ok
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 9.10 scrub starts
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 9.10 scrub ok
Oct 10 09:48:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 10 09:48:18 compute-0 ceph-mon[73551]: 11.1f scrub starts
Oct 10 09:48:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:18 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:48:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:48:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:18 compute-0 podman[100872]: 2025-10-10 09:48:18.911611754 +0000 UTC m=+0.078729759 container exec 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:48:18 compute-0 podman[100872]: 2025-10-10 09:48:18.928556132 +0000 UTC m=+0.095674147 container exec_died 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:48:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:19 compute-0 podman[100940]: 2025-10-10 09:48:19.194536458 +0000 UTC m=+0.062226118 container exec 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, release=1793)
Oct 10 09:48:19 compute-0 podman[100940]: 2025-10-10 09:48:19.218737003 +0000 UTC m=+0.086426623 container exec_died 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4)
Oct 10 09:48:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:19.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:19 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 10 09:48:19 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 10 09:48:19 compute-0 podman[101005]: 2025-10-10 09:48:19.494185851 +0000 UTC m=+0.069445575 container exec a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:19 compute-0 podman[101005]: 2025-10-10 09:48:19.539646396 +0000 UTC m=+0.114906030 container exec_died a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:48:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:48:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 10 09:48:19 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 10 09:48:19 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 86 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[56,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:19 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 86 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[56,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:19 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 86 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[56,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:19 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 86 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[56,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:19 compute-0 ceph-mon[73551]: [10/Oct/2025:09:48:17] ENGINE Bus STARTING
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 8.9 scrub starts
Oct 10 09:48:19 compute-0 ceph-mon[73551]: [10/Oct/2025:09:48:17] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 8.9 scrub ok
Oct 10 09:48:19 compute-0 ceph-mon[73551]: [10/Oct/2025:09:48:18] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:48:19 compute-0 ceph-mon[73551]: [10/Oct/2025:09:48:18] ENGINE Bus STARTED
Oct 10 09:48:19 compute-0 ceph-mon[73551]: [10/Oct/2025:09:48:18] ENGINE Client ('192.168.122.100', 53560) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:48:19 compute-0 ceph-mon[73551]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 11.1f scrub ok
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 11.12 scrub starts
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 11.12 scrub ok
Oct 10 09:48:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 10 09:48:19 compute-0 ceph-mon[73551]: osdmap e85: 3 total, 3 up, 3 in
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mgrmap e31: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 11.10 scrub starts
Oct 10 09:48:19 compute-0 ceph-mon[73551]: 11.10 scrub ok
Oct 10 09:48:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 podman[101079]: 2025-10-10 09:48:19.833105195 +0000 UTC m=+0.082700141 container exec 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:48:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:20 compute-0 podman[101079]: 2025-10-10 09:48:20.049357936 +0000 UTC m=+0.298952862 container exec_died 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 10 09:48:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:20 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:20 compute-0 podman[101194]: 2025-10-10 09:48:20.466000966 +0000 UTC m=+0.057477881 container exec fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:20 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 10 09:48:20 compute-0 podman[101194]: 2025-10-10 09:48:20.502932251 +0000 UTC m=+0.094409146 container exec_died fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:20 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 10 09:48:20 compute-0 sudo[100444]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:20 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa81c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 sudo[101237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:20 compute-0 sudo[101237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:20 compute-0 sudo[101237]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:20 compute-0 sudo[101262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:48:20 compute-0 sudo[101262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 10 09:48:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:20 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
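
The two audited "osd pool set ... pgp_num_actual" commands (val 9 earlier, val 10 here) look like the mgr stepping pgp_num toward the pool's pg_num target one increment at a time. A sketch that tracks the ramp from a saved journal dump (file name hypothetical), keeping only commands the monitor reported as finished:

    import re

    ramp = []
    with open("compute-0-journal.txt") as f:   # hypothetical journal dump
        for line in f:
            if "finished" not in line:
                continue
            m = re.search(r'"pool": "([^"]+)", "var": "pgp_num_actual", '
                          r'"val": "(\d+)"', line)
            if m:
                ramp.append((m.group(1), int(m.group(2))))

    print(ramp)   # e.g. [('default.rgw.log', 9), ('default.rgw.log', 10)]
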
Oct 10 09:48:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 10 09:48:20 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 10 09:48:20 compute-0 ceph-mon[73551]: 9.8 scrub starts
Oct 10 09:48:20 compute-0 ceph-mon[73551]: 9.8 scrub ok
Oct 10 09:48:20 compute-0 ceph-mon[73551]: 11.1 scrub starts
Oct 10 09:48:20 compute-0 ceph-mon[73551]: 11.1 scrub ok
Oct 10 09:48:20 compute-0 ceph-mon[73551]: osdmap e86: 3 total, 3 up, 3 in
Oct 10 09:48:20 compute-0 ceph-mon[73551]: 8.2 scrub starts
Oct 10 09:48:20 compute-0 ceph-mon[73551]: 8.2 scrub ok
Oct 10 09:48:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 10 09:48:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000067s ======
Oct 10 09:48:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
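The audit channel records every mon command twice, once at dispatch and once at completion, and the quoting differs: dispatch logs a bare cmd=[...] while "finished" wraps it as cmd='[...]'. Read-only commands (such as "config generate-minimal-conf" later in this section) are dispatched at DBG, while mutating ones land at INF. A sketch that extracts the command payload and phase from either form:

    import json, re

    # Hedged sketch: pull the JSON command array and phase out of a ceph
    # audit-log line, handling both the bare dispatch form cmd=[...] and
    # the quoted finished form cmd='[...]' seen above.
    CMD_RE = re.compile(r"cmd='?(\[.*\])'?: (dispatch|finished)")

    def audit_cmd(line: str):
        m = CMD_RE.search(line)
        return (json.loads(m.group(1)), m.group(2)) if m else None

    line = ("entity='mgr.compute-0.xkdepb' cmd='["
            '{"prefix": "config rm", "who": "osd/host:compute-1", '
            '"name": "osd_memory_target"}'
            "]': finished")
    print(audit_cmd(line))
    # ([{'prefix': 'config rm', 'who': 'osd/host:compute-1',
    #    'name': 'osd_memory_target'}], 'finished')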
Oct 10 09:48:21 compute-0 sudo[101262]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:21.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:21 compute-0 sudo[101319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:21 compute-0 sudo[101319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-0 sudo[101319]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-0 sudo[101344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:48:21 compute-0 sudo[101344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Oct 10 09:48:21 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Oct 10 09:48:21 compute-0 sudo[101344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:48:21 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:48:21 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:48:21 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:48:21 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:48:21 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:48:21 compute-0 sudo[101388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:48:21 compute-0 sudo[101388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-0 sudo[101388]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-0 sudo[101413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:48:21 compute-0 sudo[101413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-0 sudo[101413]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 88 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88) [0] r=0 lpr=88 pi=[56,88)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:21 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 88 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88) [0] r=0 lpr=88 pi=[56,88)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:21 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 88 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88) [0] r=0 lpr=88 pi=[56,88)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:21 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 88 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88) [0] r=0 lpr=88 pi=[56,88)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
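osdmap e88 changed the mapping for PGs 10.8 and 10.18 (pool 10, likely the default.rgw.log pool whose pgp_num_actual was just raised), so osd.0 restarts their peering interval: the acting set moves from [1] to [0], this OSD's role goes from -1 to 0 per the lines above, and it transitions to Primary. The later e89/e90/e92 blocks below repeat the same dance for 10.a and 10.1a as placements settle. A sketch that decodes the up/acting transition notation:

    import re

    # Hedged sketch: decode the up/acting set transition in the
    # PeeringState::start_peering_interval lines above.
    TRANS_RE = re.compile(
        r'up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], '
        r'acting \[(?P<acting_old>[\d,]*)\] -> \[(?P<acting_new>[\d,]*)\]'
    )

    def transition(line: str):
        m = TRANS_RE.search(line)
        if not m:
            return None
        osds = lambda s: [int(x) for x in s.split(',') if x]
        return {k: osds(v) for k, v in m.groupdict().items()}

    line = ('PeeringState::start_peering_interval up [0] -> [0], '
            'acting [1] -> [0], acting_primary 1 -> 0')
    print(transition(line))
    # {'up_old': [0], 'up_new': [0], 'acting_old': [1], 'acting_new': [0]}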
Oct 10 09:48:21 compute-0 ceph-mon[73551]: pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:21 compute-0 ceph-mon[73551]: 8.13 scrub starts
Oct 10 09:48:21 compute-0 ceph-mon[73551]: 8.13 scrub ok
Oct 10 09:48:21 compute-0 ceph-mon[73551]: 9.d deep-scrub starts
Oct 10 09:48:21 compute-0 ceph-mon[73551]: 9.d deep-scrub ok
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 10 09:48:21 compute-0 ceph-mon[73551]: osdmap e87: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-0 ceph-mon[73551]: 9.17 scrub starts
Oct 10 09:48:21 compute-0 ceph-mon[73551]: 9.17 scrub ok
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: mgrmap e32: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:48:21 compute-0 ceph-mon[73551]: osdmap e88: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-0 sudo[101438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:21 compute-0 sudo[101438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-0 sudo[101438]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-0 sudo[101463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:21 compute-0 sudo[101463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-0 sudo[101463]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 sudo[101488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101488]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 sudo[101537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101537]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v10: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Oct 10 09:48:22 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:22 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:22 compute-0 sudo[101562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101562]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 sudo[101588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:48:22 compute-0 sudo[101588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101588]: pam_unix(sudo:session): session closed for user root
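The sudo sequence just completed is cephadm's file-distribution pattern: stage the new ceph.conf under /tmp/cephadm-<fsid>/..., fix ownership and mode (644 for the conf; the keyring passes below use 600), then /bin/mv it over the destination so readers never see a half-written file. A minimal sketch of the same staged-write-then-rename idea; unlike the log's /tmp staging area, where /bin/mv may cross filesystems, this sketch stages next to the destination so os.replace stays a true atomic rename:

    import os, tempfile

    # Hedged sketch of staged atomic replacement, mirroring the
    # touch/chown/chmod/mv sequence above. Mode mirrors the chmod steps:
    # 0o644 for ceph.conf, 0o600 for keyrings.
    def atomic_write(path: str, data: bytes, mode: int = 0o644) -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.', suffix='.new')
        try:
            os.write(fd, data)
            os.fchmod(fd, mode)
        finally:
            os.close(fd)
        os.replace(tmp, path)  # the final 'mv ceph.conf.new ceph.conf' step

    atomic_write('/tmp/ceph.conf.demo', b'[global]\n')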
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 sudo[101613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:22 compute-0 sudo[101613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101613]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 sudo[101638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:22 compute-0 sudo[101638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101638]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 sudo[101663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101663]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:22 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa808003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:22 compute-0 sudo[101688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:22 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 10 09:48:22 compute-0 sudo[101688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 10 09:48:22 compute-0 sudo[101713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101713]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 sudo[101761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101761]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:22 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:22 compute-0 sudo[101787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:22 compute-0 sudo[101787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101787]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:22 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 10 09:48:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 10 09:48:22 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 10 09:48:22 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 89 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=8.905251503s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=51'1091 mlcod 0'0 active pruub 218.343811035s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:22 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 89 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=8.905197144s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 218.343811035s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:22 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 89 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=8.904626846s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=51'1091 mlcod 0'0 active pruub 218.344375610s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:22 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 89 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=8.904580116s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 218.344375610s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:22 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 89 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=88/89 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88) [0] r=0 lpr=88 pi=[56,88)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:22 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 89 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=88/89 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88) [0] r=0 lpr=88 pi=[56,88)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:22 compute-0 ceph-mon[73551]: 11.11 deep-scrub starts
Oct 10 09:48:22 compute-0 ceph-mon[73551]: 11.11 deep-scrub ok
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mon[73551]: 11.f deep-scrub starts
Oct 10 09:48:22 compute-0 ceph-mon[73551]: 11.f deep-scrub ok
Oct 10 09:48:22 compute-0 ceph-mon[73551]: 8.1f scrub starts
Oct 10 09:48:22 compute-0 ceph-mon[73551]: 8.1f scrub ok
Oct 10 09:48:22 compute-0 ceph-mon[73551]: pgmap v10: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 ceph-mon[73551]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 10 09:48:22 compute-0 ceph-mon[73551]: osdmap e89: 3 total, 3 up, 3 in
Oct 10 09:48:22 compute-0 sudo[101812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-0 sudo[101812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101812]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:22 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:22 compute-0 sudo[101837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:48:22 compute-0 sudo[101837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-0 sudo[101837]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[101862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:48:23 compute-0 sudo[101862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[101862]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[101887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[101887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[101887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[101912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:23 compute-0 sudo[101912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[101912]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[101937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[101937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[101937]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000067s ======
Oct 10 09:48:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:23.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Oct 10 09:48:23 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 sudo[101985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[101985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[101985]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 sudo[102010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[102010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102010]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
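The _set_new_cache_sizes line above is the monitor's memory autotuner reporting how it splits roughly 0.95 GiB of cache, apparently between incremental and full osdmap caches and the RocksDB block cache; the three allocations account for almost the entire target. A quick check of the arithmetic from that line:

    # The allocations reported by _set_new_cache_sizes above, in bytes.
    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232
    kv_alloc = 322961408

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)     # 1019215872 838859 (~0.8 MiB slack)
    print(round(cache_size / 2**30, 2))  # 0.95 (GiB)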
Oct 10 09:48:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 10 09:48:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 10 09:48:23 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 10 09:48:23 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 90 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:23 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 90 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:23 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 90 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:23 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 90 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:23 compute-0 sudo[102035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 sudo[102035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102035]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 sudo[102060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:23 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 10 09:48:23 compute-0 sudo[102060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102060]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 10 09:48:23 compute-0 sudo[102085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:23 compute-0 sudo[102085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102085]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[102110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[102110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102110]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[102135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:23 compute-0 sudo[102135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102135]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 sudo[102160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[102160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102160]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:48:23 compute-0 ceph-mon[73551]: 8.1d scrub starts
Oct 10 09:48:23 compute-0 ceph-mon[73551]: 8.1d scrub ok
Oct 10 09:48:23 compute-0 ceph-mon[73551]: 9.e scrub starts
Oct 10 09:48:23 compute-0 ceph-mon[73551]: 9.e scrub ok
Oct 10 09:48:23 compute-0 ceph-mon[73551]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mon[73551]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mon[73551]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mon[73551]: osdmap e90: 3 total, 3 up, 3 in
Oct 10 09:48:23 compute-0 ceph-mon[73551]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:48:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:23 compute-0 sudo[102208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-0 sudo[102208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-0 sudo[102208]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-0 sudo[102233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:24 compute-0 sudo[102233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-0 sudo[102233]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-0 sudo[102259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:24 compute-0 sudo[102259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-0 sudo[102259]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:24 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:24 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 12 op/s; 54 B/s, 2 objects/s recovering
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:24 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 10 09:48:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 10 09:48:24 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 91 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=5 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[66,90)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:24 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 91 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=6 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[66,90)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:24 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:24 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:24 compute-0 ceph-mon[73551]: 7.b scrub starts
Oct 10 09:48:24 compute-0 ceph-mon[73551]: 7.b scrub ok
Oct 10 09:48:24 compute-0 ceph-mon[73551]: 11.5 scrub starts
Oct 10 09:48:24 compute-0 ceph-mon[73551]: 11.5 scrub ok
Oct 10 09:48:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: pgmap v13: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 12 op/s; 54 B/s, 2 objects/s recovering
Oct 10 09:48:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: osdmap e91: 3 total, 3 up, 3 in
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:48:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:48:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:48:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:48:25 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:48:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:48:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:48:25 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:48:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:48:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:25 compute-0 sudo[102286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:25 compute-0 sudo[102286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:25 compute-0 sudo[102286]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:25 compute-0 sudo[102311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:48:25 compute-0 sudo[102311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
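This burst is the preflight and launch of an OSD-deployment pass: the mgr checks for destroyed OSD ids it could recycle ("osd tree" with states ["destroyed"]), fetches the client.bootstrap-osd key, generates a minimal conf, and then drives the host-local cephadm copy (/var/lib/ceph/<fsid>/cephadm.<sha256>) via sudo to run "ceph-volume lvm batch" against the pre-made LV /dev/ceph_vg0/ceph_lv0 inside the pinned quay.io/ceph/ceph image, with CEPH_VOLUME_OSDSPEC_AFFINITY tying any resulting OSD to the "default_drive_group" spec. The conf and keyring travel on stdin as the "--config-json -" payload; a hedged sketch of its likely shape, with key names following cephadm convention and every value illustrative:

    import json

    # Hedged sketch: the likely --config-json payload cephadm pipes to the
    # ceph-volume container on stdin. Values are illustrative; the keyring
    # is deliberately redacted.
    payload = {
        'config': '[global]\n\tfsid = 21f084a3-af34-5230-afe4-ea5cd24a55f4\n'
                  '\tmon_host = 192.168.122.100\n',
        'keyring': '[client.bootstrap-osd]\n\tkey = <redacted>\n',
    }
    print(json.dumps(payload)[:72] + '...')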
Oct 10 09:48:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:25.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 10 09:48:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 10 09:48:25 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 10 09:48:25 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 92 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=15.001409531s) [1] async=[1] r=-1 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 51'1091 active pruub 227.066970825s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:25 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 92 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=15.001303673s) [1] r=-1 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 227.066970825s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:25 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 92 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=14.999852180s) [1] async=[1] r=-1 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 51'1091 active pruub 227.066970825s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:25 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 92 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=14.999752045s) [1] r=-1 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 227.066970825s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.579246729 +0000 UTC m=+0.045002461 container create 91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:48:25 compute-0 systemd[1]: Started libpod-conmon-91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0.scope.
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.559019515 +0000 UTC m=+0.024775267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.689905728 +0000 UTC m=+0.155661550 container init 91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.701596963 +0000 UTC m=+0.167352695 container start 91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.705675306 +0000 UTC m=+0.171431078 container attach 91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:48:25 compute-0 dreamy_germain[102392]: 167 167
Oct 10 09:48:25 compute-0 systemd[1]: libpod-91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0.scope: Deactivated successfully.
Oct 10 09:48:25 compute-0 conmon[102392]: conmon 91791be59b96529e5597 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0.scope/container/memory.events
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.711455997 +0000 UTC m=+0.177211739 container died 91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad6ecdd2787fe461520b999d842c860a4881cf7c51c8bdc16682a876ee3a932c-merged.mount: Deactivated successfully.
Oct 10 09:48:25 compute-0 podman[102376]: 2025-10-10 09:48:25.764648576 +0000 UTC m=+0.230404338 container remove 91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 09:48:25 compute-0 systemd[1]: libpod-conmon-91791be59b96529e55972e2c049ae6fa31b1fbfb32e862a0a3280ad2914adbd0.scope: Deactivated successfully.
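Before the real ceph-volume run, cephadm spins up a throwaway container (auto-named, here dreamy_germain) whose only output is "167 167", which matches the ceph uid/gid inside upstream Ceph images (an inference from cephadm's usual uid/gid probe, not stated in the log). The journal shows the complete one-shot lifecycle, create, init, start, attach, died, remove, plus conmon's harmless cgroup warning after the scope is already gone. A sketch that folds those podman events into a per-container history:

    import re

    # Hedged sketch: track one-shot container lifecycles from podman journal
    # lines like those above ("container create <64-hex-id> (...)").
    EVENT_RE = re.compile(r'container (create|init|start|attach|died|remove) '
                          r'([0-9a-f]{64})')

    def track(lines):
        state = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                state.setdefault(m.group(2)[:12], []).append(m.group(1))
        return state

    demo = ['... container create 91791be59b96529e55972e2c049ae6fa'
            '31b1fbfb32e862a0a3280ad2914adbd0 (...)',
            '... container died 91791be59b96529e55972e2c049ae6fa'
            '31b1fbfb32e862a0a3280ad2914adbd0 (...)']
    print(track(demo))  # {'91791be59b96': ['create', 'died']}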
Oct 10 09:48:25 compute-0 podman[102416]: 2025-10-10 09:48:25.945922846 +0000 UTC m=+0.045528797 container create 0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:48:25 compute-0 ceph-mon[73551]: 11.4 scrub starts
Oct 10 09:48:25 compute-0 ceph-mon[73551]: 11.4 scrub ok
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:48:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:25 compute-0 ceph-mon[73551]: osdmap e92: 3 total, 3 up, 3 in
Oct 10 09:48:25 compute-0 systemd[1]: Started libpod-conmon-0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c.scope.
Oct 10 09:48:26 compute-0 podman[102416]: 2025-10-10 09:48:25.926075103 +0000 UTC m=+0.025681104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:26 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7805270f3bbcebbe5b55b905e32324e0cb0b336ddcf7fd22707ad7982f20c8a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7805270f3bbcebbe5b55b905e32324e0cb0b336ddcf7fd22707ad7982f20c8a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7805270f3bbcebbe5b55b905e32324e0cb0b336ddcf7fd22707ad7982f20c8a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7805270f3bbcebbe5b55b905e32324e0cb0b336ddcf7fd22707ad7982f20c8a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7805270f3bbcebbe5b55b905e32324e0cb0b336ddcf7fd22707ad7982f20c8a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:26 compute-0 podman[102416]: 2025-10-10 09:48:26.052759339 +0000 UTC m=+0.152365290 container init 0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_feistel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:48:26 compute-0 podman[102416]: 2025-10-10 09:48:26.065336993 +0000 UTC m=+0.164942944 container start 0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_feistel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:26 compute-0 podman[102416]: 2025-10-10 09:48:26.069426067 +0000 UTC m=+0.169032038 container attach 0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:48:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v16: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 12 op/s; 54 B/s, 2 objects/s recovering
Oct 10 09:48:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:26 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:26 compute-0 brave_feistel[102433]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:48:26 compute-0 brave_feistel[102433]: --> All data devices are unavailable
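[annotation] These two `-->` lines are ceph-volume's drive-group evaluation inside the short-lived brave_feistel container: the spec matched one LVM device and zero raw disks, and that LV is rejected as unavailable because it already carries osd.0, so there is nothing new to deploy. The same availability flag is exposed by the `ceph-volume inventory` subcommand; a short sketch, run on a host with ceph-volume installed:

    import json
    import subprocess

    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for dev in inv:
        # rejected_reasons explains why a device was skipped, e.g. "LVM detected"
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))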
Oct 10 09:48:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 10 09:48:26 compute-0 systemd[1]: libpod-0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c.scope: Deactivated successfully.
Oct 10 09:48:26 compute-0 podman[102416]: 2025-10-10 09:48:26.484526057 +0000 UTC m=+0.584132008 container died 0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_feistel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:48:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 10 09:48:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 10 09:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7805270f3bbcebbe5b55b905e32324e0cb0b336ddcf7fd22707ad7982f20c8a9-merged.mount: Deactivated successfully.
Oct 10 09:48:26 compute-0 podman[102416]: 2025-10-10 09:48:26.53115944 +0000 UTC m=+0.630765411 container remove 0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:48:26 compute-0 systemd[1]: libpod-conmon-0bb4bd7756553eb7e368592bc5fad537e0be6cbdf904fbed5f5e94c4024a2e4c.scope: Deactivated successfully.
Oct 10 09:48:26 compute-0 sudo[102311]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:26 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:26 compute-0 sudo[102462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:26 compute-0 sudo[102462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:26 compute-0 sudo[102462]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:26 compute-0 sudo[102487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:48:26 compute-0 sudo[102487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:26 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:26.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
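[annotation] The beast triplet above (request start, request done, access line) is the signature of an external health probe: an unauthenticated HEAD / answered 200 in about a millisecond, arriving every couple of seconds from 192.168.122.100 and .102, the usual pattern of a load-balancer check. An equivalent probe from Python, with host and port assumed since the log does not show where RGW listens:

    import http.client

    conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)  # host/port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 mirrors http_status=200 above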
Oct 10 09:48:26 compute-0 ceph-mon[73551]: 11.7 scrub starts
Oct 10 09:48:26 compute-0 ceph-mon[73551]: 11.7 scrub ok
Oct 10 09:48:26 compute-0 ceph-mon[73551]: pgmap v16: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 12 op/s; 54 B/s, 2 objects/s recovering
Oct 10 09:48:26 compute-0 ceph-mon[73551]: osdmap e93: 3 total, 3 up, 3 in
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.129678841 +0000 UTC m=+0.046726608 container create 26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 09:48:27 compute-0 systemd[1]: Started libpod-conmon-26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e.scope.
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.107916655 +0000 UTC m=+0.024964472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.232618326 +0000 UTC m=+0.149666133 container init 26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.241304681 +0000 UTC m=+0.158352448 container start 26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.244610929 +0000 UTC m=+0.161658746 container attach 26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:48:27 compute-0 silly_edison[102569]: 167 167
Oct 10 09:48:27 compute-0 systemd[1]: libpod-26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e.scope: Deactivated successfully.
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.249188051 +0000 UTC m=+0.166235848 container died 26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8519f0ac3f0d99bf7ed340b13142d07da2f2e9fa50515a017e08b3d4fac86f2-merged.mount: Deactivated successfully.
Oct 10 09:48:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:27 compute-0 podman[102553]: 2025-10-10 09:48:27.304124067 +0000 UTC m=+0.221171834 container remove 26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_edison, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:48:27 compute-0 systemd[1]: libpod-conmon-26e195a502836bf78c7fbccfbff77e4d86504fba90072d1d16603f064ef71e6e.scope: Deactivated successfully.
Oct 10 09:48:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:27] "GET /metrics HTTP/1.1" 200 46569 "" "Prometheus/2.51.0"
Oct 10 09:48:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:27] "GET /metrics HTTP/1.1" 200 46569 "" "Prometheus/2.51.0"
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.490694502 +0000 UTC m=+0.051629279 container create 8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:48:27 compute-0 systemd[1]: Started libpod-conmon-8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4.scope.
Oct 10 09:48:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cad5216dd9627d619e3cc2de4a837ccc094986d3bb608717d4668ad7f38923/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.473561809 +0000 UTC m=+0.034496616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cad5216dd9627d619e3cc2de4a837ccc094986d3bb608717d4668ad7f38923/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cad5216dd9627d619e3cc2de4a837ccc094986d3bb608717d4668ad7f38923/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cad5216dd9627d619e3cc2de4a837ccc094986d3bb608717d4668ad7f38923/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.5840093 +0000 UTC m=+0.144944077 container init 8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.598024921 +0000 UTC m=+0.158959698 container start 8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.602188158 +0000 UTC m=+0.163122955 container attach 8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]: {
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:     "0": [
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:         {
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "devices": [
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "/dev/loop3"
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             ],
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "lv_name": "ceph_lv0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "lv_size": "21470642176",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "name": "ceph_lv0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "tags": {
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.cluster_name": "ceph",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.crush_device_class": "",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.encrypted": "0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.osd_id": "0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.type": "block",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.vdo": "0",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:                 "ceph.with_tpm": "0"
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             },
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "type": "block",
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:             "vg_name": "ceph_vg0"
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:         }
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]:     ]
Oct 10 09:48:27 compute-0 xenodochial_brattain[102609]: }
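[annotation] The JSON emitted by xenodochial_brattain is the answer to the 09:48:26 cephadm call (`ceph-volume ... lvm list --format json`): a map from OSD id to the logical volumes backing it, with the authoritative metadata carried in LVM tags (ceph.osd_fsid, ceph.cluster_fsid, ceph.encrypted, ...). A small sketch that parses a payload of this shape, reading the JSON on stdin:

    import json
    import sys

    report = json.load(sys.stdin)  # e.g. the lvm list output printed above
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")

For the payload above this prints a single line for osd.0 on /dev/ceph_vg0/ceph_lv0.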
Oct 10 09:48:27 compute-0 systemd[1]: libpod-8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4.scope: Deactivated successfully.
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.883490187 +0000 UTC m=+0.444424964 container died 8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cad5216dd9627d619e3cc2de4a837ccc094986d3bb608717d4668ad7f38923-merged.mount: Deactivated successfully.
Oct 10 09:48:27 compute-0 podman[102592]: 2025-10-10 09:48:27.932016213 +0000 UTC m=+0.492951010 container remove 8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:48:27 compute-0 systemd[1]: libpod-conmon-8a8d7015ce12d508123c0056b2c437029e177c6935af85802734f6f010bf43c4.scope: Deactivated successfully.
Oct 10 09:48:27 compute-0 sudo[102487]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:27 compute-0 ceph-mon[73551]: 8.4 deep-scrub starts
Oct 10 09:48:27 compute-0 ceph-mon[73551]: 8.4 deep-scrub ok
Oct 10 09:48:28 compute-0 sudo[102631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:28 compute-0 sudo[102631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:28 compute-0 sudo[102631]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:28 compute-0 sudo[102657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:48:28 compute-0 sudo[102657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 5 objects/s recovering
Oct 10 09:48:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Oct 10 09:48:28 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
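[annotation] pgp_num_actual is how the mgr walks placement forward after a pg_num change: rather than remapping every PG at once, it raises the effective pgp_num one step per cycle (12 here, 13 two seconds later) so only a small slice of data moves at a time. A sketch of the operator-facing equivalent of the dispatched command; normally the mgr drives this itself and issuing it by hand is unnecessary:

    import subprocess

    # Same command the audit line records, issued via the ceph CLI
    subprocess.run(["ceph", "osd", "pool", "set",
                    "default.rgw.log", "pgp_num_actual", "12"], check=True)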
Oct 10 09:48:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:28 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8100016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:28 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8100016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:28 compute-0 podman[102724]: 2025-10-10 09:48:28.589593846 +0000 UTC m=+0.048087273 container create 01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:48:28 compute-0 systemd[1]: Started libpod-conmon-01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99.scope.
Oct 10 09:48:28 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:28 compute-0 podman[102724]: 2025-10-10 09:48:28.568298876 +0000 UTC m=+0.026792333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:28 compute-0 podman[102724]: 2025-10-10 09:48:28.68645214 +0000 UTC m=+0.144945607 container init 01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 09:48:28 compute-0 podman[102724]: 2025-10-10 09:48:28.69706552 +0000 UTC m=+0.155558967 container start 01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_vaughan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:48:28 compute-0 podman[102724]: 2025-10-10 09:48:28.701130773 +0000 UTC m=+0.159624250 container attach 01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_vaughan, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:48:28 compute-0 frosty_vaughan[102741]: 167 167
Oct 10 09:48:28 compute-0 systemd[1]: libpod-01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99.scope: Deactivated successfully.
Oct 10 09:48:28 compute-0 podman[102746]: 2025-10-10 09:48:28.75943421 +0000 UTC m=+0.037770953 container died 01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 09:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b6e950c4ad4a3785b3994fbda32b1e04702c83c940f919c3e0887175590d3ec-merged.mount: Deactivated successfully.
Oct 10 09:48:28 compute-0 podman[102746]: 2025-10-10 09:48:28.813092774 +0000 UTC m=+0.091429457 container remove 01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:48:28 compute-0 systemd[1]: libpod-conmon-01f458b309af874921cf1ef4c33d016714e52bd7c8f1a72bab3b64020f2e9b99.scope: Deactivated successfully.
Oct 10 09:48:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:28 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 10 09:48:29 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 10 09:48:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 10 09:48:29 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 10 09:48:29 compute-0 ceph-mon[73551]: 11.1b scrub starts
Oct 10 09:48:29 compute-0 ceph-mon[73551]: 11.1b scrub ok
Oct 10 09:48:29 compute-0 ceph-mon[73551]: 8.1c scrub starts
Oct 10 09:48:29 compute-0 ceph-mon[73551]: 8.1c scrub ok
Oct 10 09:48:29 compute-0 ceph-mon[73551]: pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 5 objects/s recovering
Oct 10 09:48:29 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.045481676 +0000 UTC m=+0.062450914 container create 9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:48:29 compute-0 systemd[1]: Started libpod-conmon-9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e.scope.
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.015547702 +0000 UTC m=+0.032516970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f645132ca030f4dff5a2453ab54146b57a5474ae9d8cd8bf43716a08a464cb4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f645132ca030f4dff5a2453ab54146b57a5474ae9d8cd8bf43716a08a464cb4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f645132ca030f4dff5a2453ab54146b57a5474ae9d8cd8bf43716a08a464cb4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f645132ca030f4dff5a2453ab54146b57a5474ae9d8cd8bf43716a08a464cb4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.128210437 +0000 UTC m=+0.145179725 container init 9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_knuth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.140472889 +0000 UTC m=+0.157442167 container start 9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.144538553 +0000 UTC m=+0.161507831 container attach 9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:48:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:29.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:29 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct 10 09:48:29 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct 10 09:48:29 compute-0 lvm[102859]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:48:29 compute-0 lvm[102859]: VG ceph_vg0 finished
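[annotation] The two lvm[] messages come from udev-driven autoactivation: once the PV on /dev/loop3 is seen online, the volume group ceph_vg0 is complete and its LVs are activated. The resulting state can be confirmed with lvm's JSON reporting; the column names below are standard `lvs` fields:

    import json
    import subprocess

    out = subprocess.run(["lvs", "--reportformat", "json",
                          "-o", "vg_name,lv_name,lv_active", "ceph_vg0"],
                         capture_output=True, text=True, check=True)
    print(json.loads(out.stdout)["report"][0]["lv"])
    # expected along the lines of:
    # [{'vg_name': 'ceph_vg0', 'lv_name': 'ceph_lv0', 'lv_active': 'active'}]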
Oct 10 09:48:29 compute-0 lucid_knuth[102785]: {}
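[annotation] The bare `{}` from lucid_knuth is the entire answer to the 09:48:28 `ceph-volume ... raw list` call: no raw-mode (non-LVM) OSDs exist on this host, consistent with the lvm list output earlier. cephadm queries both modes on every inventory pass; a trivial sketch telling them apart:

    import json
    import sys

    raw = json.load(sys.stdin)          # the "{}" above
    print("raw-mode OSDs:", len(raw))   # 0: everything here is LVM-backed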
Oct 10 09:48:29 compute-0 systemd[1]: libpod-9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e.scope: Deactivated successfully.
Oct 10 09:48:29 compute-0 systemd[1]: libpod-9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e.scope: Consumed 1.222s CPU time.
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.868635413 +0000 UTC m=+0.885604651 container died 9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f645132ca030f4dff5a2453ab54146b57a5474ae9d8cd8bf43716a08a464cb4b-merged.mount: Deactivated successfully.
Oct 10 09:48:29 compute-0 podman[102769]: 2025-10-10 09:48:29.919893909 +0000 UTC m=+0.936863147 container remove 9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_knuth, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:29 compute-0 systemd[1]: libpod-conmon-9094105730dac46054452732943350d0ac94035351a541dd175593ad49606d4e.scope: Deactivated successfully.
Oct 10 09:48:29 compute-0 sudo[102657]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:29 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 10 09:48:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
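[annotation] The three config-key writes above are cephadm persisting its per-host cache in the mon key-value store: the device inventory it just gathered, the host metadata, and the prometheus service spec. The audit entries omit the values, which can be large; they can be read back with `ceph config-key get`, the key name copied verbatim from the log:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    val = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(val[:120], "...")  # JSON device inventory, truncated for display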
Oct 10 09:48:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 10 09:48:30 compute-0 ceph-mon[73551]: 11.1d scrub starts
Oct 10 09:48:30 compute-0 ceph-mon[73551]: 11.1d scrub ok
Oct 10 09:48:30 compute-0 ceph-mon[73551]: 12.1a deep-scrub starts
Oct 10 09:48:30 compute-0 ceph-mon[73551]: 12.1a deep-scrub ok
Oct 10 09:48:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 10 09:48:30 compute-0 ceph-mon[73551]: osdmap e94: 3 total, 3 up, 3 in
Oct 10 09:48:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 10 09:48:30 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 10 09:48:30 compute-0 sudo[102876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:48:30 compute-0 sudo[102875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:48:30 compute-0 sudo[102876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:30 compute-0 sudo[102875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:30 compute-0 sudo[102876]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:30 compute-0 sudo[102875]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 5 objects/s recovering
Oct 10 09:48:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Oct 10 09:48:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 10 09:48:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:30 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:30 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 10 09:48:30 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 10 09:48:30 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 10 09:48:30 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 10 09:48:30 compute-0 sudo[102925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:30 compute-0 sudo[102925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:30 compute-0 sudo[102925]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:30 compute-0 sudo[102950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:30 compute-0 sudo[102950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
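[annotation] This is the reconfigure path end to end: the mgr's cephadm module decides the alertmanager dependencies changed, logs its intent, then connects as ceph-admin (the sudo sessions above) and runs the staged cephadm binary with `_orch deploy`, which regenerates /var/lib/ceph/<fsid>/alertmanager.compute-0/ and its unit files. A sketch of the operator-facing way to force the same run, assuming the cephadm orchestrator backend:

    import subprocess

    # Regenerates the daemon's config and restarts it, the manual
    # counterpart of the mgr-initiated reconfigure above
    subprocess.run(["ceph", "orch", "daemon", "reconfig",
                    "alertmanager.compute-0"], check=True)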
Oct 10 09:48:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:30 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8100016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:30 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 10 09:48:30 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.812006843 +0000 UTC m=+0.043054986 volume create dc01316260f9fc2842c08c4ba1dd732a983b1de67160b167e474bafdb5411c9e
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.824246876 +0000 UTC m=+0.055295019 container create 9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:30 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:30 compute-0 systemd[1]: Started libpod-conmon-9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60.scope.
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.796739181 +0000 UTC m=+0.027787344 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 10 09:48:30 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037407ddc5a4577ddae77b2f8245eaa4a772999aefcf8c9ae8988e7b701bb3b7/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.925303729 +0000 UTC m=+0.156351952 container init 9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.93416427 +0000 UTC m=+0.165212413 container start 9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:30 compute-0 zen_mayer[103008]: 65534 65534
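[annotation] The short-lived zen_mayer container exists only to print "65534 65534": cephadm probes the image for the uid/gid its daemon runs as (65534 is nobody in the alertmanager image) so it can chown the config it writes on the host before deploying. A comparable probe by hand; the stat'ed path is an assumption, as the log does not show it:

    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/prometheus/alertmanager:v0.25.0",
         "-c", "%u %g", "/etc/alertmanager"],   # path assumed
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expect "65534 65534", as logged above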
Oct 10 09:48:30 compute-0 systemd[1]: libpod-9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60.scope: Deactivated successfully.
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.939275738 +0000 UTC m=+0.170323901 container attach 9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.940925162 +0000 UTC m=+0.171973315 container died 9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:30.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-037407ddc5a4577ddae77b2f8245eaa4a772999aefcf8c9ae8988e7b701bb3b7-merged.mount: Deactivated successfully.
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.987777202 +0000 UTC m=+0.218825365 container remove 9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:30 compute-0 podman[102991]: 2025-10-10 09:48:30.995863589 +0000 UTC m=+0.226911752 volume remove dc01316260f9fc2842c08c4ba1dd732a983b1de67160b167e474bafdb5411c9e
Oct 10 09:48:31 compute-0 systemd[1]: libpod-conmon-9d0cfc38a1a2b7fd07253006739882b75c7961960f52099e296c3a6dabca8e60.scope: Deactivated successfully.
Oct 10 09:48:31 compute-0 ceph-mon[73551]: 12.1c scrub starts
Oct 10 09:48:31 compute-0 ceph-mon[73551]: 12.1c scrub ok
Oct 10 09:48:31 compute-0 ceph-mon[73551]: 11.1c deep-scrub starts
Oct 10 09:48:31 compute-0 ceph-mon[73551]: 11.1c deep-scrub ok
Oct 10 09:48:31 compute-0 ceph-mon[73551]: 12.17 scrub starts
Oct 10 09:48:31 compute-0 ceph-mon[73551]: 12.17 scrub ok
Oct 10 09:48:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:31 compute-0 ceph-mon[73551]: osdmap e95: 3 total, 3 up, 3 in
Oct 10 09:48:31 compute-0 ceph-mon[73551]: pgmap v21: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 5 objects/s recovering
Oct 10 09:48:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 10 09:48:31 compute-0 ceph-mon[73551]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 10 09:48:31 compute-0 ceph-mon[73551]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 10 09:48:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 10 09:48:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 10 09:48:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 10 09:48:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.091842984 +0000 UTC m=+0.060400496 volume create 7e10429c3f24e70aaf2fbd685ec2a7662e81e1dacdfe82b36fbcc2f63eb78cd8
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.101385448 +0000 UTC m=+0.069942970 container create 60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 systemd[1]: Started libpod-conmon-60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1.scope.
Oct 10 09:48:31 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda9d88cac8859ee23557c37b06bf415b685525807a2bbdc1e96d2cac293b0c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.078468935 +0000 UTC m=+0.047026447 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.177631146 +0000 UTC m=+0.146188728 container init 60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.18475461 +0000 UTC m=+0.153312122 container start 60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 objective_pasteur[103042]: 65534 65534
Oct 10 09:48:31 compute-0 systemd[1]: libpod-60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1.scope: Deactivated successfully.
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.191886065 +0000 UTC m=+0.160443667 container attach 60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.192275757 +0000 UTC m=+0.160833269 container died 60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cda9d88cac8859ee23557c37b06bf415b685525807a2bbdc1e96d2cac293b0c-merged.mount: Deactivated successfully.
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.238489687 +0000 UTC m=+0.207047209 container remove 60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_pasteur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 podman[103026]: 2025-10-10 09:48:31.242701675 +0000 UTC m=+0.211259227 volume remove 7e10429c3f24e70aaf2fbd685ec2a7662e81e1dacdfe82b36fbcc2f63eb78cd8
Oct 10 09:48:31 compute-0 systemd[1]: libpod-conmon-60f9487115dccbb54e9940dedc6ecc15e4fc508f7948d1e4316511e3c5c506b1.scope: Deactivated successfully.
Oct 10 09:48:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:48:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:31.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:31 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:48:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[98134]: ts=2025-10-10T09:48:31.546Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Oct 10 09:48:31 compute-0 podman[103092]: 2025-10-10 09:48:31.556812813 +0000 UTC m=+0.049611573 container died a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 10 09:48:31 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 10 09:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4410f3bb456ba7f0088fed6a62919964611c1ad0c46f266c88475e476aafc196-merged.mount: Deactivated successfully.
Oct 10 09:48:31 compute-0 podman[103092]: 2025-10-10 09:48:31.6080863 +0000 UTC m=+0.100885059 container remove a6bf6d19455d268de5746756717480a933023ec3ac7a20f959a974697c880da6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:31 compute-0 podman[103092]: 2025-10-10 09:48:31.611872435 +0000 UTC m=+0.104671194 volume remove d3d11b6e2c04c16f3fdfb68b83b7136cad02b37c9703134f388fc9cbf8c1997d
Oct 10 09:48:31 compute-0 bash[103092]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0
Oct 10 09:48:31 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@alertmanager.compute-0.service: Deactivated successfully.
Oct 10 09:48:31 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:48:31 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@alertmanager.compute-0.service: Consumed 1.320s CPU time.
Oct 10 09:48:31 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:48:32 compute-0 podman[103198]: 2025-10-10 09:48:32.049844455 +0000 UTC m=+0.049953083 volume create fc66a41ab5e2dc2ecc48f34ebba33eb8cbe47e8b7c0d05a6bd40b02a76834947
Oct 10 09:48:32 compute-0 podman[103198]: 2025-10-10 09:48:32.062989127 +0000 UTC m=+0.063097745 container create e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 10 09:48:32 compute-0 ceph-mon[73551]: 7.8 scrub starts
Oct 10 09:48:32 compute-0 ceph-mon[73551]: 7.8 scrub ok
Oct 10 09:48:32 compute-0 ceph-mon[73551]: 8.12 scrub starts
Oct 10 09:48:32 compute-0 ceph-mon[73551]: 8.12 scrub ok
Oct 10 09:48:32 compute-0 ceph-mon[73551]: 9.1d scrub starts
Oct 10 09:48:32 compute-0 ceph-mon[73551]: 9.1d scrub ok
Oct 10 09:48:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 10 09:48:32 compute-0 ceph-mon[73551]: osdmap e96: 3 total, 3 up, 3 in
Oct 10 09:48:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 10 09:48:32 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 10 09:48:32 compute-0 podman[103198]: 2025-10-10 09:48:32.030884852 +0000 UTC m=+0.030993460 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 10 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8faa40d01203e55c5c8abb0dfaf32a3e67e9ac97bad93db81ee848c82e58587a/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8faa40d01203e55c5c8abb0dfaf32a3e67e9ac97bad93db81ee848c82e58587a/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:32 compute-0 podman[103198]: 2025-10-10 09:48:32.158172027 +0000 UTC m=+0.158280695 container init e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:32 compute-0 podman[103198]: 2025-10-10 09:48:32.167448653 +0000 UTC m=+0.167557271 container start e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:32 compute-0 bash[103198]: e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a
Oct 10 09:48:32 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:48:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Oct 10 09:48:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.214Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.215Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.224Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.226Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct 10 09:48:32 compute-0 sudo[102950]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:32 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.264Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 10 09:48:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.265Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.270Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:32.270Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct 10 09:48:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 10 09:48:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 10 09:48:32 compute-0 ceph-mgr[73845]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Oct 10 09:48:32 compute-0 ceph-mgr[73845]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Oct 10 09:48:32 compute-0 sudo[103235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:32 compute-0 sudo[103235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:32 compute-0 sudo[103235]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:32 compute-0 sudo[103260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:32 compute-0 sudo[103260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:32 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:32 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 10 09:48:32 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 10 09:48:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:32 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8100016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:32.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.03350351 +0000 UTC m=+0.073873710 container create da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d (image=quay.io/ceph/grafana:10.4.0, name=focused_cori, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 systemd[1]: Started libpod-conmon-da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d.scope.
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.00097682 +0000 UTC m=+0.041347110 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 10 09:48:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 10 09:48:33 compute-0 ceph-mon[73551]: 7.f scrub starts
Oct 10 09:48:33 compute-0 ceph-mon[73551]: 7.f scrub ok
Oct 10 09:48:33 compute-0 ceph-mon[73551]: 8.19 scrub starts
Oct 10 09:48:33 compute-0 ceph-mon[73551]: 8.19 scrub ok
Oct 10 09:48:33 compute-0 ceph-mon[73551]: 11.16 deep-scrub starts
Oct 10 09:48:33 compute-0 ceph-mon[73551]: 11.16 deep-scrub ok
Oct 10 09:48:33 compute-0 ceph-mon[73551]: osdmap e97: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-0 ceph-mon[73551]: pgmap v24: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 10 09:48:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:33 compute-0 ceph-mon[73551]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 10 09:48:33 compute-0 ceph-mon[73551]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 10 09:48:33 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 10 09:48:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 98 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=8 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=11.346649170s) [1] r=-1 lpr=98 pi=[75,98)/1 crt=51'1091 mlcod 0'0 active pruub 231.057723999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 98 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=8 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=11.346286774s) [1] r=-1 lpr=98 pi=[75,98)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 231.057723999s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 98 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=5 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=11.352015495s) [1] r=-1 lpr=98 pi=[75,98)/1 crt=51'1091 mlcod 0'0 active pruub 231.064849854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 98 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=5 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=11.351851463s) [1] r=-1 lpr=98 pi=[75,98)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 231.064849854s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.133946603 +0000 UTC m=+0.174316853 container init da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d (image=quay.io/ceph/grafana:10.4.0, name=focused_cori, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.144435638 +0000 UTC m=+0.184805838 container start da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d (image=quay.io/ceph/grafana:10.4.0, name=focused_cori, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.147682335 +0000 UTC m=+0.188052545 container attach da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d (image=quay.io/ceph/grafana:10.4.0, name=focused_cori, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 focused_cori[103318]: 472 0
Oct 10 09:48:33 compute-0 systemd[1]: libpod-da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d.scope: Deactivated successfully.
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.152437671 +0000 UTC m=+0.192807891 container died da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d (image=quay.io/ceph/grafana:10.4.0, name=focused_cori, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-623f8e2bc39476b45bc7915bb40e2bf8dcc95eb30c9ea82664c39d0dc0605004-merged.mount: Deactivated successfully.
Oct 10 09:48:33 compute-0 podman[103302]: 2025-10-10 09:48:33.196123968 +0000 UTC m=+0.236494168 container remove da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d (image=quay.io/ceph/grafana:10.4.0, name=focused_cori, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 systemd[1]: libpod-conmon-da38fbca6bb7f72cc9f92af959ab01ad598c4dbace6fb02ed914c07f8986877d.scope: Deactivated successfully.
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.282214488 +0000 UTC m=+0.057664607 container create e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56 (image=quay.io/ceph/grafana:10.4.0, name=tender_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:33.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:33 compute-0 systemd[1]: Started libpod-conmon-e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56.scope.
Oct 10 09:48:33 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.260804455 +0000 UTC m=+0.036254594 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.358143895 +0000 UTC m=+0.133594024 container init e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56 (image=quay.io/ceph/grafana:10.4.0, name=tender_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.36893035 +0000 UTC m=+0.144380479 container start e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56 (image=quay.io/ceph/grafana:10.4.0, name=tender_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 tender_fermi[103354]: 472 0
Oct 10 09:48:33 compute-0 systemd[1]: libpod-e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56.scope: Deactivated successfully.
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.373653625 +0000 UTC m=+0.149103744 container attach e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56 (image=quay.io/ceph/grafana:10.4.0, name=tender_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.374138201 +0000 UTC m=+0.149588320 container died e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56 (image=quay.io/ceph/grafana:10.4.0, name=tender_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-45a7c6b8cc10948e47f32f9a998b91b0a477a8cf01ef1152b2638588e6b05c74-merged.mount: Deactivated successfully.
Oct 10 09:48:33 compute-0 podman[103337]: 2025-10-10 09:48:33.426495552 +0000 UTC m=+0.201945701 container remove e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56 (image=quay.io/ceph/grafana:10.4.0, name=tender_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 systemd[1]: libpod-conmon-e5c7105e757c93851024d849da47e0bcc3d3d0a44b278ed07a832d9a341bea56.scope: Deactivated successfully.
Oct 10 09:48:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 10 09:48:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 99 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=5 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=0 lpr=99 pi=[75,99)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 99 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=5 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=0 lpr=99 pi=[75,99)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 99 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=8 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=0 lpr=99 pi=[75,99)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 99 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=75/76 n=8 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=0 lpr=99 pi=[75,99)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:33 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:48:33 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Oct 10 09:48:33 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Oct 10 09:48:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=server t=2025-10-10T09:48:33.717214371Z level=info msg="Shutdown started" reason="System signal: terminated"
Oct 10 09:48:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=tracing t=2025-10-10T09:48:33.717552263Z level=info msg="Closing tracing"
Oct 10 09:48:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=ticker t=2025-10-10T09:48:33.717655386Z level=info msg=stopped last_tick=2025-10-10T09:48:30Z
Oct 10 09:48:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=grafana-apiserver t=2025-10-10T09:48:33.717712098Z level=info msg="StorageObjectCountTracker pruner is exiting"
Oct 10 09:48:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[98662]: logger=sqlstore.transactions t=2025-10-10T09:48:33.729713093Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 10 09:48:33 compute-0 podman[103400]: 2025-10-10 09:48:33.748139208 +0000 UTC m=+0.065653269 container died 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6702f60835d419b95dde36cdb8baaa1f6a7b3f4824477c8ece6343448cba2d3-merged.mount: Deactivated successfully.
Oct 10 09:48:33 compute-0 podman[103400]: 2025-10-10 09:48:33.798362381 +0000 UTC m=+0.115876422 container remove 686053d68eed66946d009dfd2181cdf1226c5d766a5c8fd1a96d36b6eaba469d (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:33 compute-0 bash[103400]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0
Oct 10 09:48:33 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@grafana.compute-0.service: Deactivated successfully.
Oct 10 09:48:33 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:48:33 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@grafana.compute-0.service: Consumed 4.815s CPU time.
Oct 10 09:48:33 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:48:34 compute-0 sshd-session[103454]: Accepted publickey for zuul from 192.168.122.30 port 45494 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:48:34 compute-0 systemd-logind[806]: New session 40 of user zuul.
Oct 10 09:48:34 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 10 09:48:34 compute-0 sshd-session[103454]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: 7.4 scrub starts
Oct 10 09:48:34 compute-0 ceph-mon[73551]: 7.4 scrub ok
Oct 10 09:48:34 compute-0 ceph-mon[73551]: 11.1e scrub starts
Oct 10 09:48:34 compute-0 ceph-mon[73551]: 11.1e scrub ok
Oct 10 09:48:34 compute-0 ceph-mon[73551]: 12.3 scrub starts
Oct 10 09:48:34 compute-0 ceph-mon[73551]: 12.3 scrub ok
Oct 10 09:48:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 10 09:48:34 compute-0 ceph-mon[73551]: osdmap e98: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-0 ceph-mon[73551]: osdmap e99: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:34.227Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000999467s
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:34 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:34 compute-0 podman[103530]: 2025-10-10 09:48:34.283334397 +0000 UTC m=+0.054870935 container create 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:34 compute-0 systemd[91328]: Starting Mark boot as successful...
Oct 10 09:48:34 compute-0 systemd[91328]: Finished Mark boot as successful.
Oct 10 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871780f34a888f15ea845810827fc643fdfc00b9f922bd1ceb3771da01597cdc/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871780f34a888f15ea845810827fc643fdfc00b9f922bd1ceb3771da01597cdc/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871780f34a888f15ea845810827fc643fdfc00b9f922bd1ceb3771da01597cdc/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871780f34a888f15ea845810827fc643fdfc00b9f922bd1ceb3771da01597cdc/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871780f34a888f15ea845810827fc643fdfc00b9f922bd1ceb3771da01597cdc/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:34 compute-0 podman[103530]: 2025-10-10 09:48:34.338522892 +0000 UTC m=+0.110059460 container init 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:34 compute-0 podman[103530]: 2025-10-10 09:48:34.346830895 +0000 UTC m=+0.118367433 container start 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:34 compute-0 bash[103530]: 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135
Oct 10 09:48:34 compute-0 podman[103530]: 2025-10-10 09:48:34.264462677 +0000 UTC m=+0.035999235 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 10 09:48:34 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:48:34 compute-0 sudo[103260]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: [prometheus INFO root] Restarting engine...
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:34] ENGINE Bus STOPPING
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:34] ENGINE Bus STOPPING
Oct 10 09:48:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 100 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=8 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[75,99)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:34 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 100 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=5 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[75,99)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.53978882Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-10T09:48:34Z
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540119881Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540134492Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540138462Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540142212Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540145772Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540149212Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540152822Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540156402Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540160142Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540163703Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540167023Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540170733Z level=info msg=Target target=[all]
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540176553Z level=info msg="Path Home" path=/usr/share/grafana
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540180623Z level=info msg="Path Data" path=/var/lib/grafana
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540184723Z level=info msg="Path Logs" path=/var/log/grafana
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540189183Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540193594Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=settings t=2025-10-10T09:48:34.540197954Z level=info msg="App mode production"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=sqlstore t=2025-10-10T09:48:34.540515973Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=sqlstore t=2025-10-10T09:48:34.540532654Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=migrator t=2025-10-10T09:48:34.541127234Z level=info msg="Starting DB migrations"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=migrator t=2025-10-10T09:48:34.558914399Z level=info msg="migrations completed" performed=0 skipped=547 duration=570.699µs
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=sqlstore t=2025-10-10T09:48:34.560035365Z level=info msg="Created default organization"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=secrets t=2025-10-10T09:48:34.560550542Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:34 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa82c001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:34 compute-0 sudo[103630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugin.store t=2025-10-10T09:48:34.585058468Z level=info msg="Loading plugins..."
Oct 10 09:48:34 compute-0 sudo[103630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:34 compute-0 sudo[103630]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:34 compute-0 sudo[103673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:48:34 compute-0 sudo[103673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:34 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.a scrub starts
Oct 10 09:48:34 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.a scrub ok
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=local.finder t=2025-10-10T09:48:34.657253093Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugin.store t=2025-10-10T09:48:34.657654786Z level=info msg="Plugins loaded" count=55 duration=72.598558ms
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=query_data t=2025-10-10T09:48:34.660353435Z level=info msg="Query Service initialization"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=live.push_http t=2025-10-10T09:48:34.663540939Z level=info msg="Live Push Gateway initialization"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ngalert.migration t=2025-10-10T09:48:34.667244471Z level=info msg=Starting
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ngalert.state.manager t=2025-10-10T09:48:34.678130329Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=infra.usagestats.collector t=2025-10-10T09:48:34.680597639Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=provisioning.datasources t=2025-10-10T09:48:34.682902656Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=provisioning.alerting t=2025-10-10T09:48:34.707280027Z level=info msg="starting to provision alerting"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=provisioning.alerting t=2025-10-10T09:48:34.707308268Z level=info msg="finished to provision alerting"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafanaStorageLogger t=2025-10-10T09:48:34.709215601Z level=info msg="Storage starting"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ngalert.state.manager t=2025-10-10T09:48:34.710365039Z level=info msg="Warming state cache for startup"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ngalert.multiorg.alertmanager t=2025-10-10T09:48:34.710639758Z level=info msg="Starting MultiOrg Alertmanager"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=http.server t=2025-10-10T09:48:34.711404922Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=http.server t=2025-10-10T09:48:34.712274631Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=provisioning.dashboard t=2025-10-10T09:48:34.776485623Z level=info msg="starting to provision dashboards"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ngalert.state.manager t=2025-10-10T09:48:34.778168739Z level=info msg="State cache has been initialized" states=0 duration=67.795459ms
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ngalert.scheduler t=2025-10-10T09:48:34.77823608Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=ticker t=2025-10-10T09:48:34.778347164Z level=info msg=starting first_tick=2025-10-10T09:48:40Z
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=provisioning.dashboard t=2025-10-10T09:48:34.791728444Z level=info msg="finished to provision dashboards"
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugins.update.checker t=2025-10-10T09:48:34.800229844Z level=info msg="Update check succeeded" duration=75.070199ms
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana.update.checker t=2025-10-10T09:48:34.834744978Z level=info msg="Update check succeeded" duration=109.693357ms
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:34 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:34 compute-0 python3.9[103741]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:34] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:34] ENGINE Bus STOPPED
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:34] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:34] ENGINE Bus STOPPED
Oct 10 09:48:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:34] ENGINE Bus STARTING
Oct 10 09:48:34 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:34] ENGINE Bus STARTING
Oct 10 09:48:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:34.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:35] ENGINE Serving on http://:::9283
Oct 10 09:48:35 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:35] ENGINE Serving on http://:::9283
Oct 10 09:48:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: [10/Oct/2025:09:48:35] ENGINE Bus STARTED
Oct 10 09:48:35 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:35] ENGINE Bus STARTED
Oct 10 09:48:35 compute-0 ceph-mgr[73845]: [prometheus INFO root] Engine started.
Oct 10 09:48:35 compute-0 ceph-mon[73551]: 7.3 deep-scrub starts
Oct 10 09:48:35 compute-0 ceph-mon[73551]: 7.3 deep-scrub ok
Oct 10 09:48:35 compute-0 ceph-mon[73551]: 9.11 scrub starts
Oct 10 09:48:35 compute-0 ceph-mon[73551]: 9.11 scrub ok
Oct 10 09:48:35 compute-0 ceph-mon[73551]: 11.17 scrub starts
Oct 10 09:48:35 compute-0 ceph-mon[73551]: 11.17 scrub ok
Oct 10 09:48:35 compute-0 ceph-mon[73551]: pgmap v27: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 09:48:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:35 compute-0 ceph-mon[73551]: osdmap e100: 3 total, 3 up, 3 in
Oct 10 09:48:35 compute-0 podman[103855]: 2025-10-10 09:48:35.216248653 +0000 UTC m=+0.080095194 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:48:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana-apiserver t=2025-10-10T09:48:35.245757553Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct 10 09:48:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana-apiserver t=2025-10-10T09:48:35.246192907Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct 10 09:48:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:35.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:35 compute-0 podman[103855]: 2025-10-10 09:48:35.316974445 +0000 UTC m=+0.180820906 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 09:48:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 10 09:48:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 10 09:48:35 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 10 09:48:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 101 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101 pruub=15.001284599s) [1] async=[1] r=-1 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 51'1091 active pruub 237.076232910s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 101 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101 pruub=15.001111984s) [1] r=-1 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 237.076232910s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 101 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101 pruub=15.000266075s) [1] async=[1] r=-1 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 51'1091 active pruub 237.076293945s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:35 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 101 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101 pruub=15.000122070s) [1] r=-1 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 237.076293945s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:35 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 10 09:48:35 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 10 09:48:35 compute-0 podman[104079]: 2025-10-10 09:48:35.909655483 +0000 UTC m=+0.072758103 container exec 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:35 compute-0 podman[104079]: 2025-10-10 09:48:35.920841201 +0000 UTC m=+0.083943811 container exec_died 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 12.a scrub starts
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 12.a scrub ok
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 11.1a scrub starts
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 11.1a scrub ok
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 9.13 scrub starts
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 9.13 scrub ok
Oct 10 09:48:36 compute-0 ceph-mon[73551]: osdmap e101: 3 total, 3 up, 3 in
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 9.12 scrub starts
Oct 10 09:48:36 compute-0 ceph-mon[73551]: 9.12 scrub ok
Oct 10 09:48:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Oct 10 09:48:36 compute-0 python3.9[104146]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:36 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8100016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:36 compute-0 podman[104213]: 2025-10-10 09:48:36.279181624 +0000 UTC m=+0.067338125 container exec 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:48:36 compute-0 podman[104213]: 2025-10-10 09:48:36.291673255 +0000 UTC m=+0.079829756 container exec_died 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 10 09:48:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 10 09:48:36 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 10 09:48:36 compute-0 podman[104279]: 2025-10-10 09:48:36.500669307 +0000 UTC m=+0.054966258 container exec 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:48:36 compute-0 podman[104279]: 2025-10-10 09:48:36.51565271 +0000 UTC m=+0.069949631 container exec_died 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:48:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:36 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:36 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 10 09:48:36 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 10 09:48:36 compute-0 podman[104370]: 2025-10-10 09:48:36.780027963 +0000 UTC m=+0.074045326 container exec 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Oct 10 09:48:36 compute-0 podman[104370]: 2025-10-10 09:48:36.793204467 +0000 UTC m=+0.087221780 container exec_died 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, architecture=x86_64, distribution-scope=public)
Oct 10 09:48:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:36 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 09:48:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 09:48:37 compute-0 podman[104484]: 2025-10-10 09:48:37.067474355 +0000 UTC m=+0.062349681 container exec e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:37 compute-0 podman[104484]: 2025-10-10 09:48:37.129733543 +0000 UTC m=+0.124608769 container exec_died e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:37 compute-0 sudo[104627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaeuvixxjnelcaotetubleuusgtorjxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089716.8458676-93-47956367151157/AnsiballZ_command.py'
Oct 10 09:48:37 compute-0 sudo[104627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:37.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:37 compute-0 podman[104632]: 2025-10-10 09:48:37.382252765 +0000 UTC m=+0.072833946 container exec 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:37] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 10 09:48:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:37] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Oct 10 09:48:37 compute-0 python3.9[104634]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:48:37 compute-0 ceph-mon[73551]: 7.2 scrub starts
Oct 10 09:48:37 compute-0 ceph-mon[73551]: 7.2 scrub ok
Oct 10 09:48:37 compute-0 ceph-mon[73551]: 12.9 deep-scrub starts
Oct 10 09:48:37 compute-0 ceph-mon[73551]: 12.9 deep-scrub ok
Oct 10 09:48:37 compute-0 ceph-mon[73551]: pgmap v30: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Oct 10 09:48:37 compute-0 ceph-mon[73551]: osdmap e102: 3 total, 3 up, 3 in
Oct 10 09:48:37 compute-0 ceph-mon[73551]: 11.14 scrub starts
Oct 10 09:48:37 compute-0 ceph-mon[73551]: 11.14 scrub ok
Oct 10 09:48:37 compute-0 sudo[104627]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:37 compute-0 podman[104632]: 2025-10-10 09:48:37.587701331 +0000 UTC m=+0.278282472 container exec_died 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:48:37 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Oct 10 09:48:37 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Oct 10 09:48:38 compute-0 podman[104770]: 2025-10-10 09:48:38.023638835 +0000 UTC m=+0.055814406 container exec fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:38 compute-0 podman[104770]: 2025-10-10 09:48:38.070716873 +0000 UTC m=+0.102892404 container exec_died fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:38 compute-0 sudo[103673]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:48:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 4 objects/s recovering
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:38 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:38 compute-0 sudo[104865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:38 compute-0 sudo[104865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:38 compute-0 sudo[104865]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:38 compute-0 sudo[104913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:48:38 compute-0 sudo[104913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:38 compute-0 sudo[104988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewzscxalcexcwfccykvswtftprrcplz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089717.9946568-129-198317100575114/AnsiballZ_stat.py'
Oct 10 09:48:38 compute-0 sudo[104988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 7.6 scrub starts
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 7.6 scrub ok
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 11.e scrub starts
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 11.e scrub ok
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 12.8 scrub starts
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 12.8 scrub ok
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 9.a scrub starts
Oct 10 09:48:38 compute-0 ceph-mon[73551]: 9.a scrub ok
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:38 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8100032f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:38 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.b scrub starts
Oct 10 09:48:38 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.b scrub ok
Oct 10 09:48:38 compute-0 python3.9[104990]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:48:38 compute-0 sudo[104988]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:38 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:38 compute-0 podman[105051]: 2025-10-10 09:48:38.882983762 +0000 UTC m=+0.051390150 container create 3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:48:38 compute-0 systemd[1]: Started libpod-conmon-3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb.scope.
Oct 10 09:48:38 compute-0 podman[105051]: 2025-10-10 09:48:38.860161042 +0000 UTC m=+0.028567420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:38.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:38 compute-0 podman[105051]: 2025-10-10 09:48:38.988902305 +0000 UTC m=+0.157308763 container init 3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kirch, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:39 compute-0 podman[105051]: 2025-10-10 09:48:39.000947961 +0000 UTC m=+0.169354379 container start 3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:48:39 compute-0 podman[105051]: 2025-10-10 09:48:39.006836495 +0000 UTC m=+0.175242893 container attach 3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:48:39 compute-0 eager_kirch[105072]: 167 167
Oct 10 09:48:39 compute-0 systemd[1]: libpod-3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb.scope: Deactivated successfully.
Oct 10 09:48:39 compute-0 podman[105051]: 2025-10-10 09:48:39.01368148 +0000 UTC m=+0.182087878 container died 3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kirch, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b2c0e321fc10ffcf50731b6a141ca55eb8a99b1cf5189cf9772e7e7defa124-merged.mount: Deactivated successfully.
Oct 10 09:48:39 compute-0 podman[105051]: 2025-10-10 09:48:39.06962468 +0000 UTC m=+0.238031058 container remove 3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:48:39 compute-0 systemd[1]: libpod-conmon-3a98ba0a7ca0cdef46b321f3ea1e9d62ef61e95625b6f2e9c11fdf102f1286fb.scope: Deactivated successfully.
Oct 10 09:48:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 10 09:48:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 10 09:48:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 10 09:48:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.282870261 +0000 UTC m=+0.047802162 container create 7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_allen, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 09:48:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:39 compute-0 systemd[1]: Started libpod-conmon-7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175.scope.
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.26034518 +0000 UTC m=+0.025277121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4f51efd2933fa492b24067a9b7440e1697ef7f17861cbe63c947dc38738545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4f51efd2933fa492b24067a9b7440e1697ef7f17861cbe63c947dc38738545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4f51efd2933fa492b24067a9b7440e1697ef7f17861cbe63c947dc38738545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4f51efd2933fa492b24067a9b7440e1697ef7f17861cbe63c947dc38738545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4f51efd2933fa492b24067a9b7440e1697ef7f17861cbe63c947dc38738545/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.402897759 +0000 UTC m=+0.167829690 container init 7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_allen, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.420699903 +0000 UTC m=+0.185631804 container start 7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_allen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.425282365 +0000 UTC m=+0.190214276 container attach 7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_allen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:48:39 compute-0 ceph-mon[73551]: 11.3 scrub starts
Oct 10 09:48:39 compute-0 ceph-mon[73551]: 11.3 scrub ok
Oct 10 09:48:39 compute-0 ceph-mon[73551]: pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 4 objects/s recovering
Oct 10 09:48:39 compute-0 ceph-mon[73551]: 12.b scrub starts
Oct 10 09:48:39 compute-0 ceph-mon[73551]: 12.b scrub ok
Oct 10 09:48:39 compute-0 ceph-mon[73551]: 9.f scrub starts
Oct 10 09:48:39 compute-0 ceph-mon[73551]: 9.f scrub ok
Oct 10 09:48:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 10 09:48:39 compute-0 ceph-mon[73551]: osdmap e103: 3 total, 3 up, 3 in
Oct 10 09:48:39 compute-0 sudo[105244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxmwbxugmyhlbgxvbhznshcyyqvnbeso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089719.1387467-162-118091634312866/AnsiballZ_file.py'
Oct 10 09:48:39 compute-0 sudo[105244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:39 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Oct 10 09:48:39 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Oct 10 09:48:39 compute-0 nostalgic_allen[105166]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:48:39 compute-0 nostalgic_allen[105166]: --> All data devices are unavailable
Oct 10 09:48:39 compute-0 systemd[1]: libpod-7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175.scope: Deactivated successfully.
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.831741649 +0000 UTC m=+0.596673560 container died 7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:48:39 compute-0 python3.9[105246]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a4f51efd2933fa492b24067a9b7440e1697ef7f17861cbe63c947dc38738545-merged.mount: Deactivated successfully.
Oct 10 09:48:39 compute-0 sudo[105244]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:39 compute-0 podman[105148]: 2025-10-10 09:48:39.888778395 +0000 UTC m=+0.653710276 container remove 7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_allen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:48:39 compute-0 systemd[1]: libpod-conmon-7d63841e4764320ecb732531f55c121da4d1b0372004577c524af8d10abe3175.scope: Deactivated successfully.
Oct 10 09:48:39 compute-0 sudo[104913]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:40 compute-0 sudo[105291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:40 compute-0 sudo[105291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:40 compute-0 sudo[105291]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:40 compute-0 sudo[105325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:48:40 compute-0 sudo[105325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 7 B/s, 3 objects/s recovering
Oct 10 09:48:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Oct 10 09:48:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 10 09:48:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:40 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 10 09:48:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 10 09:48:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 10 09:48:40 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 10 09:48:40 compute-0 ceph-mon[73551]: 8.d scrub starts
Oct 10 09:48:40 compute-0 ceph-mon[73551]: 8.d scrub ok
Oct 10 09:48:40 compute-0 ceph-mon[73551]: 12.6 scrub starts
Oct 10 09:48:40 compute-0 ceph-mon[73551]: 12.6 scrub ok
Oct 10 09:48:40 compute-0 ceph-mon[73551]: 9.6 scrub starts
Oct 10 09:48:40 compute-0 ceph-mon[73551]: 9.6 scrub ok
Oct 10 09:48:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 10 09:48:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:40 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa820002390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.612631226 +0000 UTC m=+0.049177647 container create 86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:48:40 compute-0 systemd[1]: Started libpod-conmon-86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a.scope.
Oct 10 09:48:40 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.c scrub starts
Oct 10 09:48:40 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.c scrub ok
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.590518409 +0000 UTC m=+0.027064830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.702098189 +0000 UTC m=+0.138644610 container init 86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.712656426 +0000 UTC m=+0.149202827 container start 86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:48:40 compute-0 sad_ganguly[105534]: 167 167
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.716797042 +0000 UTC m=+0.153343433 container attach 86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:48:40 compute-0 systemd[1]: libpod-86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a.scope: Deactivated successfully.
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.718520329 +0000 UTC m=+0.155066760 container died 86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:48:40 compute-0 python3.9[105512]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-50321409cc2ad22de3e6b48002ab6b3c3af1162066b3e347c8399b54e8d0aa77-merged.mount: Deactivated successfully.
Oct 10 09:48:40 compute-0 podman[105517]: 2025-10-10 09:48:40.762638569 +0000 UTC m=+0.199184970 container remove 86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:48:40 compute-0 systemd[1]: libpod-conmon-86cc18657e6a8b7e4fe05a7b8038d79af8f1e378ee583100c46a7e46f1ca7c0a.scope: Deactivated successfully.
Oct 10 09:48:40 compute-0 network[105569]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:48:40 compute-0 network[105570]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:48:40 compute-0 network[105571]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:48:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:40 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838001ac0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:40 compute-0 podman[105582]: 2025-10-10 09:48:40.940670393 +0000 UTC m=+0.043845372 container create 83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:48:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:40.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:41 compute-0 podman[105582]: 2025-10-10 09:48:40.92291272 +0000 UTC m=+0.026087709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:41 compute-0 systemd[1]: Started libpod-conmon-83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991.scope.
Oct 10 09:48:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 10 09:48:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-0 ceph-mon[73551]: 8.16 scrub starts
Oct 10 09:48:41 compute-0 ceph-mon[73551]: 8.16 scrub ok
Oct 10 09:48:41 compute-0 ceph-mon[73551]: pgmap v34: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 7 B/s, 3 objects/s recovering
Oct 10 09:48:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 10 09:48:41 compute-0 ceph-mon[73551]: osdmap e104: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-0 ceph-mon[73551]: 12.c scrub starts
Oct 10 09:48:41 compute-0 ceph-mon[73551]: 12.c scrub ok
Oct 10 09:48:41 compute-0 ceph-mon[73551]: 8.1b scrub starts
Oct 10 09:48:41 compute-0 ceph-mon[73551]: 8.1b scrub ok
Oct 10 09:48:41 compute-0 ceph-mon[73551]: osdmap e105: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b0465ef223ee43099d371f31a72eb0d7f05205427ce6e72c398edbc787f0c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b0465ef223ee43099d371f31a72eb0d7f05205427ce6e72c398edbc787f0c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b0465ef223ee43099d371f31a72eb0d7f05205427ce6e72c398edbc787f0c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b0465ef223ee43099d371f31a72eb0d7f05205427ce6e72c398edbc787f0c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:41 compute-0 podman[105582]: 2025-10-10 09:48:41.611570304 +0000 UTC m=+0.714745303 container init 83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:41 compute-0 podman[105582]: 2025-10-10 09:48:41.622374979 +0000 UTC m=+0.725549948 container start 83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_raman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct 10 09:48:41 compute-0 podman[105582]: 2025-10-10 09:48:41.626770203 +0000 UTC m=+0.729945202 container attach 83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:48:41 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 10 09:48:41 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 10 09:48:41 compute-0 vigilant_raman[105600]: {
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:     "0": [
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:         {
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "devices": [
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "/dev/loop3"
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             ],
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "lv_name": "ceph_lv0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "lv_size": "21470642176",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "name": "ceph_lv0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "tags": {
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.cluster_name": "ceph",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.crush_device_class": "",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.encrypted": "0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.osd_id": "0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.type": "block",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.vdo": "0",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:                 "ceph.with_tpm": "0"
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             },
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "type": "block",
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:             "vg_name": "ceph_vg0"
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:         }
Oct 10 09:48:41 compute-0 vigilant_raman[105600]:     ]
Oct 10 09:48:41 compute-0 vigilant_raman[105600]: }
Oct 10 09:48:41 compute-0 podman[105582]: 2025-10-10 09:48:41.955628157 +0000 UTC m=+1.058803126 container died 83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_raman, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:48:41 compute-0 systemd[1]: libpod-83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991.scope: Deactivated successfully.
Oct 10 09:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3b0465ef223ee43099d371f31a72eb0d7f05205427ce6e72c398edbc787f0c4-merged.mount: Deactivated successfully.
Oct 10 09:48:42 compute-0 podman[105582]: 2025-10-10 09:48:42.011438642 +0000 UTC m=+1.114613621 container remove 83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_raman, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 09:48:42 compute-0 systemd[1]: libpod-conmon-83e2eef3a1a1107103ef6dcc6d01e820a20d88263a797f3b4198996a07ef2991.scope: Deactivated successfully.
Oct 10 09:48:42 compute-0 sudo[105325]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:42 compute-0 sudo[105649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:42 compute-0 sudo[105649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:42 compute-0 sudo[105649]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v37: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 7 B/s, 3 objects/s recovering
Oct 10 09:48:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Oct 10 09:48:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 10 09:48:42 compute-0 sudo[105677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:48:42 compute-0 sudo[105677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:48:42.229Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002772651s
Oct 10 09:48:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:42 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 10 09:48:42 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 10 09:48:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 10 09:48:42 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 10 09:48:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:42 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:42 compute-0 ceph-mon[73551]: 12.18 scrub starts
Oct 10 09:48:42 compute-0 ceph-mon[73551]: 12.18 scrub ok
Oct 10 09:48:42 compute-0 ceph-mon[73551]: 7.9 scrub starts
Oct 10 09:48:42 compute-0 ceph-mon[73551]: 7.9 scrub ok
Oct 10 09:48:42 compute-0 ceph-mon[73551]: 8.18 scrub starts
Oct 10 09:48:42 compute-0 ceph-mon[73551]: 8.18 scrub ok
Oct 10 09:48:42 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 10 09:48:42 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 10 09:48:42 compute-0 ceph-mon[73551]: osdmap e106: 3 total, 3 up, 3 in
Oct 10 09:48:42 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 10 09:48:42 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.675839249 +0000 UTC m=+0.043795181 container create bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 09:48:42 compute-0 systemd[1]: Started libpod-conmon-bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9.scope.
Oct 10 09:48:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.658298373 +0000 UTC m=+0.026254325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.765725234 +0000 UTC m=+0.133681186 container init bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_spence, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.7734828 +0000 UTC m=+0.141438732 container start bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_spence, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:48:42 compute-0 awesome_spence[105785]: 167 167
Oct 10 09:48:42 compute-0 systemd[1]: libpod-bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9.scope: Deactivated successfully.
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.777383158 +0000 UTC m=+0.145339110 container attach bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_spence, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.781556575 +0000 UTC m=+0.149512527 container died bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c75146fdfb80616c3f350df8fa2c16bc11d60aca353425dcb1916f9c80a5234-merged.mount: Deactivated successfully.
Oct 10 09:48:42 compute-0 podman[105765]: 2025-10-10 09:48:42.824171407 +0000 UTC m=+0.192127339 container remove bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:42 compute-0 systemd[1]: libpod-conmon-bc086dfa274ad020d2c086537825670acf0fdd009e8585a3b5347370763177f9.scope: Deactivated successfully.
Oct 10 09:48:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:42 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:43 compute-0 podman[105821]: 2025-10-10 09:48:43.023709967 +0000 UTC m=+0.055576138 container create 8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 09:48:43 compute-0 systemd[1]: Started libpod-conmon-8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7.scope.
Oct 10 09:48:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fc888ca05d345ab9440d41112d845be8d3b0a16c20b4a149d964f1a20ca9a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fc888ca05d345ab9440d41112d845be8d3b0a16c20b4a149d964f1a20ca9a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fc888ca05d345ab9440d41112d845be8d3b0a16c20b4a149d964f1a20ca9a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fc888ca05d345ab9440d41112d845be8d3b0a16c20b4a149d964f1a20ca9a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:48:43 compute-0 podman[105821]: 2025-10-10 09:48:43.004960061 +0000 UTC m=+0.036826252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:48:43 compute-0 podman[105821]: 2025-10-10 09:48:43.113612574 +0000 UTC m=+0.145478765 container init 8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:48:43 compute-0 podman[105821]: 2025-10-10 09:48:43.125281288 +0000 UTC m=+0.157147459 container start 8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:48:43 compute-0 podman[105821]: 2025-10-10 09:48:43.129534817 +0000 UTC m=+0.161401018 container attach 8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:48:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:43.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 10 09:48:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 10 09:48:43 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 10 09:48:43 compute-0 ceph-mon[73551]: 10.4 scrub starts
Oct 10 09:48:43 compute-0 ceph-mon[73551]: 10.4 scrub ok
Oct 10 09:48:43 compute-0 ceph-mon[73551]: pgmap v37: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 7 B/s, 3 objects/s recovering
Oct 10 09:48:43 compute-0 ceph-mon[73551]: 7.e scrub starts
Oct 10 09:48:43 compute-0 ceph-mon[73551]: 7.e scrub ok
Oct 10 09:48:43 compute-0 ceph-mon[73551]: 10.16 scrub starts
Oct 10 09:48:43 compute-0 ceph-mon[73551]: 10.16 scrub ok
Oct 10 09:48:43 compute-0 ceph-mon[73551]: osdmap e107: 3 total, 3 up, 3 in
Oct 10 09:48:43 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Oct 10 09:48:43 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Oct 10 09:48:43 compute-0 lvm[105953]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:48:43 compute-0 lvm[105953]: VG ceph_vg0 finished
Oct 10 09:48:43 compute-0 boring_lederberg[105843]: {}
Oct 10 09:48:43 compute-0 systemd[1]: libpod-8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7.scope: Deactivated successfully.
Oct 10 09:48:43 compute-0 systemd[1]: libpod-8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7.scope: Consumed 1.195s CPU time.
Oct 10 09:48:43 compute-0 podman[105821]: 2025-10-10 09:48:43.945761457 +0000 UTC m=+0.977627638 container died 8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-86fc888ca05d345ab9440d41112d845be8d3b0a16c20b4a149d964f1a20ca9a9-merged.mount: Deactivated successfully.
Oct 10 09:48:44 compute-0 podman[105821]: 2025-10-10 09:48:44.004671014 +0000 UTC m=+1.036537185 container remove 8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:48:44 compute-0 systemd[1]: libpod-conmon-8c5f44a1e2356c27a0500a974db2badbcc06db8399d62b269f8aba2b386b60d7.scope: Deactivated successfully.
Oct 10 09:48:44 compute-0 sudo[105677]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:48:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:48:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:44 compute-0 sudo[105993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:48:44 compute-0 sudo[105993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:44 compute-0 sudo[105993]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:44 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838001ac0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 10 09:48:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 10 09:48:44 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 10 09:48:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:44 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003cd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:44 compute-0 ceph-mon[73551]: 10.13 scrub starts
Oct 10 09:48:44 compute-0 ceph-mon[73551]: 10.13 scrub ok
Oct 10 09:48:44 compute-0 ceph-mon[73551]: 7.1e deep-scrub starts
Oct 10 09:48:44 compute-0 ceph-mon[73551]: 7.1e deep-scrub ok
Oct 10 09:48:44 compute-0 ceph-mon[73551]: 10.e scrub starts
Oct 10 09:48:44 compute-0 ceph-mon[73551]: 10.e scrub ok
Oct 10 09:48:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:44 compute-0 ceph-mon[73551]: osdmap e108: 3 total, 3 up, 3 in
Oct 10 09:48:44 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct 10 09:48:44 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct 10 09:48:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:44 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003cd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 10 09:48:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 10 09:48:45 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 10.14 scrub starts
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 10.14 scrub ok
Oct 10 09:48:45 compute-0 ceph-mon[73551]: pgmap v40: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 12.10 scrub starts
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 12.10 scrub ok
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 10.11 scrub starts
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 10.c scrub starts
Oct 10 09:48:45 compute-0 ceph-mon[73551]: 10.c scrub ok
Oct 10 09:48:45 compute-0 ceph-mon[73551]: osdmap e109: 3 total, 3 up, 3 in
Oct 10 09:48:45 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.e scrub starts
Oct 10 09:48:45 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.e scrub ok
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:46 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:48:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:48:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:48:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 10 09:48:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 10 09:48:46 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 10 09:48:46 compute-0 python3.9[106145]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:48:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:46 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:46 compute-0 ceph-mon[73551]: 10.11 scrub ok
Oct 10 09:48:46 compute-0 ceph-mon[73551]: 12.e scrub starts
Oct 10 09:48:46 compute-0 ceph-mon[73551]: 12.e scrub ok
Oct 10 09:48:46 compute-0 ceph-mon[73551]: 10.3 scrub starts
Oct 10 09:48:46 compute-0 ceph-mon[73551]: 10.a deep-scrub starts
Oct 10 09:48:46 compute-0 ceph-mon[73551]: 10.a deep-scrub ok
Oct 10 09:48:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:46 compute-0 ceph-mon[73551]: osdmap e110: 3 total, 3 up, 3 in
Oct 10 09:48:46 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 10 09:48:46 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 10 09:48:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:46 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:47 compute-0 python3.9[106296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:47] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 09:48:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:47] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 10.3 scrub ok
Oct 10 09:48:47 compute-0 ceph-mon[73551]: pgmap v43: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 7.1b scrub starts
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 7.1b scrub ok
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 10.f scrub starts
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 10.f scrub ok
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 10.9 scrub starts
Oct 10 09:48:47 compute-0 ceph-mon[73551]: 10.9 scrub ok
Oct 10 09:48:47 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Oct 10 09:48:47 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Oct 10 09:48:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Oct 10 09:48:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 10 09:48:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:48 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003cf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:48 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8380027d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 10 09:48:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 10 09:48:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 10 09:48:48 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 10 09:48:48 compute-0 ceph-mon[73551]: 12.12 scrub starts
Oct 10 09:48:48 compute-0 ceph-mon[73551]: 12.12 scrub ok
Oct 10 09:48:48 compute-0 ceph-mon[73551]: 10.b scrub starts
Oct 10 09:48:48 compute-0 ceph-mon[73551]: 10.b scrub ok
Oct 10 09:48:48 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 10 09:48:48 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 10 09:48:48 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 10 09:48:48 compute-0 python3.9[106451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:48 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:49.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:49 compute-0 ceph-mon[73551]: pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 10 09:48:49 compute-0 ceph-mon[73551]: osdmap e111: 3 total, 3 up, 3 in
Oct 10 09:48:49 compute-0 ceph-mon[73551]: 7.10 scrub starts
Oct 10 09:48:49 compute-0 ceph-mon[73551]: 7.10 scrub ok
Oct 10 09:48:49 compute-0 ceph-mon[73551]: 10.6 scrub starts
Oct 10 09:48:49 compute-0 ceph-mon[73551]: 10.6 scrub ok
Oct 10 09:48:49 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Oct 10 09:48:49 compute-0 sudo[106608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjgtorgxlvahlnkrqnhiwebhrkthrewr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089729.3784435-306-86918051772506/AnsiballZ_setup.py'
Oct 10 09:48:49 compute-0 sudo[106608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:49 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Oct 10 09:48:50 compute-0 python3.9[106610]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:48:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Oct 10 09:48:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 10 09:48:50 compute-0 sudo[106616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:48:50 compute-0 sudo[106616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:50 compute-0 sudo[106616]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:50 compute-0 sudo[106608]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:50 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:50 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:50 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 10 09:48:50 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 10 09:48:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 10 09:48:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 10 09:48:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 10 09:48:50 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 10 09:48:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 112 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=4 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=112 pruub=13.050806046s) [2] r=-1 lpr=112 pi=[66,112)/1 crt=51'1091 mlcod 0'0 active pruub 250.344696045s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:50 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 112 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=4 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=112 pruub=13.050753593s) [2] r=-1 lpr=112 pi=[66,112)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 250.344696045s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:50 compute-0 ceph-mon[73551]: 12.19 scrub starts
Oct 10 09:48:50 compute-0 ceph-mon[73551]: 12.19 scrub ok
Oct 10 09:48:50 compute-0 ceph-mon[73551]: 10.19 scrub starts
Oct 10 09:48:50 compute-0 ceph-mon[73551]: 10.19 scrub ok
Oct 10 09:48:50 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 10 09:48:50 compute-0 sudo[106719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwwmuumelxvxbbyactwrdzpgwsomgudu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089729.3784435-306-86918051772506/AnsiballZ_dnf.py'
Oct 10 09:48:50 compute-0 sudo[106719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:50 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8380030f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:50.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:51 compute-0 python3.9[106721]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:48:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 09:48:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:51.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 09:48:51 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Oct 10 09:48:51 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Oct 10 09:48:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 10 09:48:51 compute-0 ceph-mon[73551]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:51 compute-0 ceph-mon[73551]: 7.18 scrub starts
Oct 10 09:48:51 compute-0 ceph-mon[73551]: 7.18 scrub ok
Oct 10 09:48:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 10 09:48:51 compute-0 ceph-mon[73551]: osdmap e112: 3 total, 3 up, 3 in
Oct 10 09:48:51 compute-0 ceph-mon[73551]: 10.1a scrub starts
Oct 10 09:48:51 compute-0 ceph-mon[73551]: 10.1a scrub ok
Oct 10 09:48:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 10 09:48:51 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 10 09:48:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 113 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=4 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=113) [2]/[0] r=0 lpr=113 pi=[66,113)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:51 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 113 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=66/67 n=4 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=113) [2]/[0] r=0 lpr=113 pi=[66,113)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Oct 10 09:48:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 10 09:48:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:52 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:52 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Oct 10 09:48:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:52 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:52 compute-0 ceph-osd[81941]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Oct 10 09:48:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 10 09:48:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 10 09:48:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 10 09:48:52 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 10 09:48:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 114 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:52 compute-0 ceph-mon[73551]: 10.2 deep-scrub starts
Oct 10 09:48:52 compute-0 ceph-mon[73551]: 10.2 deep-scrub ok
Oct 10 09:48:52 compute-0 ceph-mon[73551]: osdmap e113: 3 total, 3 up, 3 in
Oct 10 09:48:52 compute-0 ceph-mon[73551]: 10.1c scrub starts
Oct 10 09:48:52 compute-0 ceph-mon[73551]: 10.1c scrub ok
Oct 10 09:48:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 10 09:48:52 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 114 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=113/114 n=4 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=113) [2]/[0] async=[2] r=0 lpr=113 pi=[66,113)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:52 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 10 09:48:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 10 09:48:53 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 10 09:48:53 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 115 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[65,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:53 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 115 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=113/114 n=4 ec=56/45 lis/c=113/66 les/c/f=114/67/0 sis=115 pruub=15.295756340s) [2] async=[2] r=-1 lpr=115 pi=[66,115)/1 crt=51'1091 mlcod 51'1091 active pruub 255.361648560s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:53 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 115 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[65,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:53 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 115 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=113/114 n=4 ec=56/45 lis/c=113/66 les/c/f=114/67/0 sis=115 pruub=15.295657158s) [2] r=-1 lpr=115 pi=[66,115)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 255.361648560s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:53 compute-0 ceph-mon[73551]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:53 compute-0 ceph-mon[73551]: 10.5 deep-scrub starts
Oct 10 09:48:53 compute-0 ceph-mon[73551]: 10.5 deep-scrub ok
Oct 10 09:48:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 10 09:48:53 compute-0 ceph-mon[73551]: osdmap e114: 3 total, 3 up, 3 in
Oct 10 09:48:53 compute-0 ceph-mon[73551]: 10.1d scrub starts
Oct 10 09:48:53 compute-0 ceph-mon[73551]: 10.1d scrub ok
Oct 10 09:48:53 compute-0 ceph-mon[73551]: osdmap e115: 3 total, 3 up, 3 in
Oct 10 09:48:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:54 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8380030f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 10 09:48:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 10 09:48:54 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 10 09:48:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:54 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:54 compute-0 ceph-mon[73551]: 10.1e scrub starts
Oct 10 09:48:54 compute-0 ceph-mon[73551]: 10.1e scrub ok
Oct 10 09:48:54 compute-0 ceph-mon[73551]: osdmap e116: 3 total, 3 up, 3 in
Oct 10 09:48:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:54 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:54.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 10 09:48:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 10 09:48:55 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 10 09:48:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 117 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=115/65 les/c/f=116/66/0 sis=117) [0] r=0 lpr=117 pi=[65,117)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:55 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 117 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=115/65 les/c/f=116/66/0 sis=117) [0] r=0 lpr=117 pi=[65,117)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:55 compute-0 ceph-mon[73551]: pgmap v53: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:55 compute-0 ceph-mon[73551]: osdmap e117: 3 total, 3 up, 3 in
Oct 10 09:48:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:56 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 10 09:48:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 10 09:48:56 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 10 09:48:56 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 118 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=117/118 n=5 ec=56/45 lis/c=115/65 les/c/f=116/66/0 sis=117) [0] r=0 lpr=117 pi=[65,117)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:56 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8380030f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:56 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:48:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:48:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:57.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:57] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 09:48:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:48:57] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 09:48:57 compute-0 ceph-mon[73551]: pgmap v56: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:57 compute-0 ceph-mon[73551]: osdmap e118: 3 total, 3 up, 3 in
Oct 10 09:48:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Oct 10 09:48:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 10 09:48:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:58 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 10 09:48:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 10 09:48:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 10 09:48:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 10 09:48:58 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 10 09:48:58 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 119 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:58 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003d50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:48:58 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:48:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 10 09:48:59 compute-0 ceph-mon[73551]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:59 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 10 09:48:59 compute-0 ceph-mon[73551]: osdmap e119: 3 total, 3 up, 3 in
Oct 10 09:48:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 10 09:48:59 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 10 09:48:59 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 120 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[72,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:59 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 120 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[72,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Oct 10 09:49:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 10 09:49:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:00 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 10 09:49:00 compute-0 ceph-mon[73551]: osdmap e120: 3 total, 3 up, 3 in
Oct 10 09:49:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 10 09:49:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:00 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 10 09:49:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 10 09:49:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 10 09:49:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:00 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003d70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:01.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:49:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 10 09:49:01 compute-0 ceph-mon[73551]: pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 10 09:49:01 compute-0 ceph-mon[73551]: osdmap e121: 3 total, 3 up, 3 in
Oct 10 09:49:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 10 09:49:01 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 10 09:49:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 122 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=120/72 les/c/f=121/73/0 sis=122) [0] r=0 lpr=122 pi=[72,122)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:01 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 122 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=120/72 les/c/f=121/73/0 sis=122) [0] r=0 lpr=122 pi=[72,122)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Oct 10 09:49:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 10 09:49:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:02 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:02 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 10 09:49:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 10 09:49:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 10 09:49:02 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 10 09:49:02 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 123 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=122/123 n=5 ec=56/45 lis/c=120/72 les/c/f=121/73/0 sis=122) [0] r=0 lpr=122 pi=[72,122)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:02 compute-0 ceph-mon[73551]: osdmap e122: 3 total, 3 up, 3 in
Oct 10 09:49:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 10 09:49:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:02 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:03.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:03 compute-0 ceph-mon[73551]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 10 09:49:03 compute-0 ceph-mon[73551]: osdmap e123: 3 total, 3 up, 3 in
Oct 10 09:49:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 09:49:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:04 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003d90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:04 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:04 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:05.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:05 compute-0 ceph-mon[73551]: pgmap v66: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 09:49:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 10 09:49:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:06 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:06 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:06 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:07] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:07] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:07 compute-0 ceph-mon[73551]: pgmap v67: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 10 09:49:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 403 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 10 09:49:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Oct 10 09:49:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 10 09:49:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:08 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa810003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:08 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa8200044a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 10 09:49:08 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 10 09:49:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 10 09:49:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 10 09:49:08 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 10 09:49:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:08 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa814003dd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:09.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:09 compute-0 ceph-mon[73551]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 403 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 10 09:49:09 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 10 09:49:09 compute-0 ceph-mon[73551]: osdmap e124: 3 total, 3 up, 3 in
Oct 10 09:49:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 10 09:49:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Oct 10 09:49:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 10 09:49:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:10 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:10 compute-0 sudo[106860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:49:10 compute-0 sudo[106860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:10 compute-0 sudo[106860]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:10 compute-0 kernel: ganesha.nfsd[105398]: segfault at 50 ip 00007fa8eb2ff32e sp 00007fa8ba7fb210 error 4 in libntirpc.so.5.8[7fa8eb2e4000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 10 09:49:10 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 09:49:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[96585]: 10/10/2025 09:49:10 : epoch 68e8d621 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa838004440 fd 47 proxy ignored for local
Oct 10 09:49:10 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 10 09:49:10 compute-0 systemd[1]: Started Process Core Dump (PID 106885/UID 0).
Oct 10 09:49:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 10 09:49:10 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 10 09:49:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 10 09:49:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 10 09:49:10 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 10 09:49:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:11.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:11.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:11 compute-0 systemd-coredump[106886]: Process 96589 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007fa8eb2ff32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 09:49:11 compute-0 systemd[1]: systemd-coredump@0-106885-0.service: Deactivated successfully.
Oct 10 09:49:11 compute-0 systemd[1]: systemd-coredump@0-106885-0.service: Consumed 1.050s CPU time.
Oct 10 09:49:11 compute-0 podman[106892]: 2025-10-10 09:49:11.824746347 +0000 UTC m=+0.037308478 container died 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff14c4a66091b425cc2098d0baaa5439d8861a2301a4264f534f660f4c730988-merged.mount: Deactivated successfully.
Oct 10 09:49:11 compute-0 podman[106892]: 2025-10-10 09:49:11.877966397 +0000 UTC m=+0.090528478 container remove 4b9bc19fc9402caeaf00471e27304182b7da502d4062b8d29f0500893500cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:49:11 compute-0 ceph-mon[73551]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 10 09:49:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 10 09:49:11 compute-0 ceph-mon[73551]: osdmap e125: 3 total, 3 up, 3 in
Oct 10 09:49:11 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 09:49:12 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 09:49:12 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.734s CPU time.
Oct 10 09:49:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 10 09:49:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Oct 10 09:49:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 10 09:49:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 10 09:49:12 compute-0 ceph-mon[73551]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 10 09:49:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 10 09:49:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 10 09:49:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 10 09:49:12 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 10 09:49:12 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 126 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=126) [0] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:13.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:13.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 10 09:49:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 10 09:49:13 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 10 09:49:13 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 127 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:13 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 127 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 10 09:49:13 compute-0 ceph-mon[73551]: osdmap e126: 3 total, 3 up, 3 in
Oct 10 09:49:13 compute-0 ceph-mon[73551]: osdmap e127: 3 total, 3 up, 3 in
Oct 10 09:49:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Oct 10 09:49:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 10 09:49:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 10 09:49:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 10 09:49:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 10 09:49:14 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 10 09:49:14 compute-0 ceph-mon[73551]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 10 09:49:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 10 09:49:14 compute-0 ceph-mon[73551]: osdmap e128: 3 total, 3 up, 3 in
Oct 10 09:49:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:49:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:49:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 10 09:49:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 10 09:49:15 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 10 09:49:15 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 129 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129) [0] r=0 lpr=129 pi=[90,129)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:15 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 129 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129) [0] r=0 lpr=129 pi=[90,129)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:49:16
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'backups', '.nfs', 'default.rgw.log', 'volumes', 'vms', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Oct 10 09:49:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:49:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:49:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:49:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:49:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 10 09:49:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 10 09:49:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 10 09:49:16 compute-0 ceph-mon[73551]: osdmap e129: 3 total, 3 up, 3 in
Oct 10 09:49:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 10 09:49:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 10 09:49:16 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 130 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=129/130 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129) [0] r=0 lpr=129 pi=[90,129)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/094916 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:49:16 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 130 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=130) [0] r=0 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:17 compute-0 ceph-mgr[73845]: [dashboard INFO request] [192.168.122.100:38348] [POST] [200] [0.157s] [4.0B] [c3ac4531-bcfb-4b92-8130-0d5704c08f1c] /api/prometheus_receiver
Oct 10 09:49:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:49:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:17.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:49:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:17] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:49:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:17] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:49:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 10 09:49:17 compute-0 ceph-mon[73551]: pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 10 09:49:17 compute-0 ceph-mon[73551]: osdmap e130: 3 total, 3 up, 3 in
Oct 10 09:49:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 10 09:49:17 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 10 09:49:17 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 131 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[97,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:17 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 131 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[97,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 10 09:49:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 10 09:49:18 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 10 09:49:18 compute-0 ceph-mon[73551]: osdmap e131: 3 total, 3 up, 3 in
Oct 10 09:49:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:19.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 10 09:49:19 compute-0 ceph-mon[73551]: pgmap v81: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:19 compute-0 ceph-mon[73551]: osdmap e132: 3 total, 3 up, 3 in
Oct 10 09:49:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 10 09:49:19 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 10 09:49:19 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 133 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133) [0] r=0 lpr=133 pi=[97,133)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:19 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 133 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133) [0] r=0 lpr=133 pi=[97,133)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 10 09:49:20 compute-0 ceph-mon[73551]: osdmap e133: 3 total, 3 up, 3 in
Oct 10 09:49:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 10 09:49:20 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 10 09:49:20 compute-0 ceph-osd[81941]: osd.0 pg_epoch: 134 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=133/134 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133) [0] r=0 lpr=133 pi=[97,133)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:49:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:49:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:21.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:21 compute-0 ceph-mon[73551]: pgmap v84: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:21 compute-0 ceph-mon[73551]: osdmap e134: 3 total, 3 up, 3 in
Oct 10 09:49:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/094921 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:49:22 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 1.
Oct 10 09:49:22 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:49:22 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.734s CPU time.
Oct 10 09:49:22 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:49:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:22 compute-0 podman[107021]: 2025-10-10 09:49:22.393820745 +0000 UTC m=+0.057510185 container create 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 09:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081d1b79c5b3f6683d6ebd6edb5d462bad6fade4c25324854f35c1efe9aacfc0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081d1b79c5b3f6683d6ebd6edb5d462bad6fade4c25324854f35c1efe9aacfc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081d1b79c5b3f6683d6ebd6edb5d462bad6fade4c25324854f35c1efe9aacfc0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081d1b79c5b3f6683d6ebd6edb5d462bad6fade4c25324854f35c1efe9aacfc0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:22 compute-0 podman[107021]: 2025-10-10 09:49:22.451543885 +0000 UTC m=+0.115233345 container init 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:49:22 compute-0 podman[107021]: 2025-10-10 09:49:22.368739749 +0000 UTC m=+0.032429219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:22 compute-0 podman[107021]: 2025-10-10 09:49:22.462731263 +0000 UTC m=+0.126420693 container start 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:49:22 compute-0 bash[107021]: 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00
Oct 10 09:49:22 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:49:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:49:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:23.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:23 compute-0 ceph-mon[73551]: pgmap v86: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Oct 10 09:49:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Oct 10 09:49:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 10 09:49:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 10 09:49:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 10 09:49:24 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 10 09:49:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 10 09:49:24 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 10 09:49:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:25.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:25 compute-0 ceph-mon[73551]: pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Oct 10 09:49:25 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 10 09:49:25 compute-0 ceph-mon[73551]: osdmap e135: 3 total, 3 up, 3 in
Oct 10 09:49:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Oct 10 09:49:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Oct 10 09:49:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 10 09:49:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 10 09:49:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 10 09:49:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 10 09:49:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 10 09:49:26 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 10 09:49:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:49:26.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:49:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:27] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:49:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:27] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:49:27 compute-0 ceph-mon[73551]: pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Oct 10 09:49:27 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 10 09:49:27 compute-0 ceph-mon[73551]: osdmap e136: 3 total, 3 up, 3 in
Oct 10 09:49:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 674 B/s wr, 1 op/s; 28 B/s, 0 objects/s recovering
Oct 10 09:49:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Oct 10 09:49:28 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 10 09:49:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:49:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:49:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 10 09:49:28 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 10 09:49:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 10 09:49:28 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 10 09:49:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 10 09:49:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:49:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:49:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:29.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 10 09:49:29 compute-0 ceph-mon[73551]: pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 674 B/s wr, 1 op/s; 28 B/s, 0 objects/s recovering
Oct 10 09:49:29 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 10 09:49:29 compute-0 ceph-mon[73551]: osdmap e137: 3 total, 3 up, 3 in
Oct 10 09:49:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 10 09:49:29 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 10 09:49:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 925 B/s wr, 2 op/s
Oct 10 09:49:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 10 09:49:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:49:30 compute-0 sudo[107086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:49:30 compute-0 sudo[107086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:30 compute-0 sudo[107086]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 10 09:49:30 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:49:30 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 10 09:49:30 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 10 09:49:30 compute-0 ceph-mon[73551]: osdmap e138: 3 total, 3 up, 3 in
Oct 10 09:49:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:49:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:49:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:49:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:31.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:49:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 10 09:49:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 10 09:49:31 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 10 09:49:31 compute-0 ceph-mon[73551]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 925 B/s wr, 2 op/s
Oct 10 09:49:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:49:31 compute-0 ceph-mon[73551]: osdmap e139: 3 total, 3 up, 3 in
Oct 10 09:49:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:31 compute-0 ceph-mon[73551]: osdmap e140: 3 total, 3 up, 3 in
Oct 10 09:49:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct 10 09:49:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 10 09:49:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 10 09:49:32 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 10 09:49:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:49:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:49:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:49:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:33.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:49:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 10 09:49:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 10 09:49:33 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 10 09:49:33 compute-0 sudo[106719]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:33 compute-0 ceph-mon[73551]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct 10 09:49:33 compute-0 ceph-mon[73551]: osdmap e141: 3 total, 3 up, 3 in
Oct 10 09:49:33 compute-0 ceph-mon[73551]: osdmap e142: 3 total, 3 up, 3 in
Oct 10 09:49:34 compute-0 sudo[107264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofajdmfmeiubsfxmuaogaqsjjmoipkgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089773.7343028-342-167151874451653/AnsiballZ_command.py'
Oct 10 09:49:34 compute-0 sudo[107264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:34 compute-0 python3.9[107266]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:49:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 10 09:49:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 10 09:49:34 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:49:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:35 compute-0 sudo[107264]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:49:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:35.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:49:35 compute-0 ceph-mon[73551]: pgmap v100: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:35 compute-0 ceph-mon[73551]: osdmap e143: 3 total, 3 up, 3 in
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.524385) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775524437, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2901, "num_deletes": 252, "total_data_size": 6427563, "memory_usage": 6602880, "flush_reason": "Manual Compaction"}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775552997, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6118759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7921, "largest_seqno": 10821, "table_properties": {"data_size": 6104678, "index_size": 9103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 34854, "raw_average_key_size": 22, "raw_value_size": 6074200, "raw_average_value_size": 3949, "num_data_blocks": 395, "num_entries": 1538, "num_filter_entries": 1538, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089675, "oldest_key_time": 1760089675, "file_creation_time": 1760089775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 28654 microseconds, and 11058 cpu microseconds.
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.553046) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6118759 bytes OK
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.553068) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.556454) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.556475) EVENT_LOG_v1 {"time_micros": 1760089775556469, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.556497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6414180, prev total WAL file size 6414180, number of live WAL files 2.
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.558069) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5975KB)], [23(10MB)]
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775558174, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 17453507, "oldest_snapshot_seqno": -1}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4075 keys, 13425783 bytes, temperature: kUnknown
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775655213, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 13425783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13393270, "index_size": 21203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104085, "raw_average_key_size": 25, "raw_value_size": 13313446, "raw_average_value_size": 3267, "num_data_blocks": 912, "num_entries": 4075, "num_filter_entries": 4075, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760089775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.655467) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13425783 bytes
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.656933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.3 rd, 138.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.8, 10.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(5.0) write-amplify(2.2) OK, records in: 4613, records dropped: 538 output_compression: NoCompression
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.656951) EVENT_LOG_v1 {"time_micros": 1760089775656941, "job": 8, "event": "compaction_finished", "compaction_time_micros": 96796, "compaction_time_cpu_micros": 53262, "output_level": 6, "num_output_files": 1, "total_output_size": 13425783, "num_input_records": 4613, "num_output_records": 4075, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775657925, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775659653, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.557913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.659764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.659773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.659775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.659776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:49:35.659778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:36 compute-0 sudo[107569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgmzvqsjxnoaqvxxyvcaobojujwkilfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089775.5522056-366-207011172651667/AnsiballZ_selinux.py'
Oct 10 09:49:36 compute-0 sudo[107569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.6 KiB/s wr, 5 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:36 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:36 compute-0 python3.9[107571]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 09:49:36 compute-0 sudo[107569]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:36 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:36 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:49:36.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:49:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:37 compute-0 sudo[107722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqnruotfdeunmpdzpatrdqfxvzcnelos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089777.0028567-399-187037558698325/AnsiballZ_command.py'
Oct 10 09:49:37 compute-0 sudo[107722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:37.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:37] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:37] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:37 compute-0 ceph-mon[73551]: pgmap v102: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.6 KiB/s wr, 5 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:37 compute-0 python3.9[107724]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 09:49:37 compute-0 sudo[107722]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:37 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:49:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:37 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:49:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s; 18 B/s, 1 objects/s recovering
Oct 10 09:49:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:38 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e0001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:38 compute-0 sudo[107875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlkjjjsqtompfinklvsvivpgirhwnils ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089777.822012-423-83590180342324/AnsiballZ_file.py'
Oct 10 09:49:38 compute-0 sudo[107875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:38 compute-0 python3.9[107877]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:49:38 compute-0 sudo[107875]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/094938 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:49:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:38 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:38 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:39.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:39 compute-0 sudo[108028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvqmdwxtdsphwkbiptsbmyxjzlqalto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089778.7950816-447-278143503535820/AnsiballZ_mount.py'
Oct 10 09:49:39 compute-0 sudo[108028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:39.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:39 compute-0 python3.9[108030]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 09:49:39 compute-0 sudo[108028]: pam_unix(sudo:session): session closed for user root
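
The three Ansible tasks just above (AnsiballZ_command, AnsiballZ_file, AnsiballZ_mount) provision a 1 GiB swap file: dd creates /swap unless it already exists, the file task locks it down to root with mode 0600, and the mount module records a swap entry in /etc/fstab. A minimal Python sketch of the same sequence; mkswap/swapon are assumed to follow in later tasks that are not visible in this excerpt:

    # Minimal sketch of the swap-file provisioning performed by the three
    # Ansible tasks above. mkswap/swapon are assumed follow-up steps; they
    # do not appear in this log excerpt.
    import os
    import subprocess

    SWAPFILE = "/swap"

    if not os.path.exists(SWAPFILE):                  # creates=/swap guard
        subprocess.run(["dd", "if=/dev/zero", f"of={SWAPFILE}",
                        "count=1024", "bs=1M"], check=True)
    os.chmod(SWAPFILE, 0o600)                         # owner=root mode=0600

    entry = f"{SWAPFILE} none swap sw 0 0\n"          # src name fstype opts dump passno
    with open("/etc/fstab", "r+") as fstab:
        if entry not in fstab.read():                 # state=present semantics
            fstab.write(entry)
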
Oct 10 09:49:39 compute-0 ceph-mon[73551]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s; 18 B/s, 1 objects/s recovering
Oct 10 09:49:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 829 B/s wr, 2 op/s; 14 B/s, 0 objects/s recovering
Oct 10 09:49:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:49:40 compute-0 sudo[108181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhnrhavvrkwnpzvxqfjfvbbkiietopdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089780.4287922-531-275254121606582/AnsiballZ_file.py'
Oct 10 09:49:40 compute-0 sudo[108181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:40 compute-0 python3.9[108184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:40 compute-0 sudo[108181]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:49:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:49:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:49:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:41.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:49:41 compute-0 sudo[108334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqxxksefmfdwrrqtdztscstgplsvjbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089781.2121172-555-163742750146309/AnsiballZ_stat.py'
Oct 10 09:49:41 compute-0 sudo[108334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:41 compute-0 ceph-mon[73551]: pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 829 B/s wr, 2 op/s; 14 B/s, 0 objects/s recovering
Oct 10 09:49:41 compute-0 python3.9[108336]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:49:41 compute-0 sudo[108334]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:42 compute-0 sudo[108413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enghrharvgacmipbkoiewwejxzzrtesi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089781.2121172-555-163742750146309/AnsiballZ_file.py'
Oct 10 09:49:42 compute-0 sudo[108413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.0 KiB/s wr, 4 op/s; 12 B/s, 0 objects/s recovering
Oct 10 09:49:42 compute-0 python3.9[108415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:49:42 compute-0 sudo[108413]: pam_unix(sudo:session): session closed for user root
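
The stat and file tasks above stage a CA bundle as /etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem (directory labeled cert_t, file mode 0644). On RHEL-family hosts a staged anchor only takes effect once the consolidated trust stores are regenerated, typically with update-ca-trust; that step is not shown in this excerpt, so the sketch below is an assumption about what follows:

    # Assumed follow-up: regenerate the consolidated trust stores so the
    # staged anchor actually takes effect. update-ca-trust is the standard
    # RHEL-family tool; this step does not appear in the log excerpt.
    import subprocess

    subprocess.run(["update-ca-trust", "extract"], check=True)
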
Oct 10 09:49:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:42 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:42 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e0001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:42 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:49:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:43.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:49:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:43.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:43 compute-0 ceph-mon[73551]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.0 KiB/s wr, 4 op/s; 12 B/s, 0 objects/s recovering
Oct 10 09:49:43 compute-0 sudo[108566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyoqcdcoogeovvisquqzyewgmamltrcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089783.3288028-627-49189012576463/AnsiballZ_getent.py'
Oct 10 09:49:43 compute-0 sudo[108566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/094943 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:49:44 compute-0 python3.9[108568]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 09:49:44 compute-0 sudo[108566]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 921 B/s wr, 3 op/s; 10 B/s, 0 objects/s recovering
Oct 10 09:49:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:44 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:44 compute-0 sudo[108631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:49:44 compute-0 sudo[108631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:44 compute-0 sudo[108631]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:44 compute-0 sudo[108679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:49:44 compute-0 sudo[108679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:44 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:44 compute-0 sudo[108770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvcifwiqwhtgquytdojzndfvhsjhyww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089784.4421308-657-134811880831129/AnsiballZ_getent.py'
Oct 10 09:49:44 compute-0 sudo[108770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:44 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e0002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:44 compute-0 python3.9[108772]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 09:49:44 compute-0 sudo[108770]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:45.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:45 compute-0 podman[108872]: 2025-10-10 09:49:45.149501721 +0000 UTC m=+0.077811826 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:49:45 compute-0 podman[108872]: 2025-10-10 09:49:45.24988014 +0000 UTC m=+0.178190235 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:49:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:45.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:45 compute-0 ceph-mon[73551]: pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 921 B/s wr, 3 op/s; 10 B/s, 0 objects/s recovering
Oct 10 09:49:45 compute-0 podman[109087]: 2025-10-10 09:49:45.76166058 +0000 UTC m=+0.067237957 container exec 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:49:45 compute-0 sudo[109130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfqcjucugctimnspnryxnlvafptmdjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089785.289985-681-142032232529069/AnsiballZ_group.py'
Oct 10 09:49:45 compute-0 sudo[109130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:45 compute-0 podman[109087]: 2025-10-10 09:49:45.798694857 +0000 UTC m=+0.104272234 container exec_died 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:49:45 compute-0 python3.9[109140]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:49:46 compute-0 sudo[109130]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:46 compute-0 podman[109226]: 2025-10-10 09:49:46.22199471 +0000 UTC m=+0.081078590 container exec 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 785 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Oct 10 09:49:46 compute-0 podman[109226]: 2025-10-10 09:49:46.242837668 +0000 UTC m=+0.101921538 container exec_died 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:49:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:49:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:46 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f4fa4dc45e0>)]
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f4fa4dc45b0>)]
Oct 10 09:49:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 10 09:49:46 compute-0 podman[109343]: 2025-10-10 09:49:46.494628092 +0000 UTC m=+0.052451853 container exec 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:49:46 compute-0 podman[109343]: 2025-10-10 09:49:46.515845923 +0000 UTC m=+0.073669694 container exec_died 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 09:49:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:46 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:46 compute-0 sudo[109474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fggruaigryzraaofjzkdazgkratdbybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089786.3432426-708-172287626698539/AnsiballZ_file.py'
Oct 10 09:49:46 compute-0 sudo[109474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:46 compute-0 podman[109484]: 2025-10-10 09:49:46.768880547 +0000 UTC m=+0.076898788 container exec 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 10 09:49:46 compute-0 podman[109484]: 2025-10-10 09:49:46.789811317 +0000 UTC m=+0.097829588 container exec_died 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, architecture=x86_64, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793)
Oct 10 09:49:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:46 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:46 compute-0 python3.9[109483]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 09:49:46 compute-0 sudo[109474]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:49:46.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:49:47 compute-0 podman[109561]: 2025-10-10 09:49:47.058458891 +0000 UTC m=+0.060044125 container exec e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:49:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:49:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:49:47 compute-0 podman[109561]: 2025-10-10 09:49:47.090758028 +0000 UTC m=+0.092343222 container exec_died e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:49:47 compute-0 podman[109648]: 2025-10-10 09:49:47.368678998 +0000 UTC m=+0.074850910 container exec 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:49:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:47] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:47] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:47.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:47 compute-0 podman[109648]: 2025-10-10 09:49:47.568468045 +0000 UTC m=+0.274639967 container exec_died 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 09:49:47 compute-0 sudo[109811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxfcxxqvamynwjuvpcemkgczgatoznkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089787.3315663-741-119413294539154/AnsiballZ_dnf.py'
Oct 10 09:49:47 compute-0 sudo[109811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:47 compute-0 ceph-mon[73551]: pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 785 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Oct 10 09:49:47 compute-0 python3.9[109818]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:49:47 compute-0 podman[109888]: 2025-10-10 09:49:47.945410671 +0000 UTC m=+0.054438996 container exec fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:49:47 compute-0 podman[109888]: 2025-10-10 09:49:47.98278311 +0000 UTC m=+0.091811425 container exec_died fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:49:48 compute-0 sudo[108679]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:48 compute-0 sudo[109931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:49:48 compute-0 sudo[109931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:48 compute-0 sudo[109931]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:48 compute-0 sudo[109956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:49:48 compute-0 sudo[109956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Oct 10 09:49:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:48 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e0002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:48 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:48 compute-0 sudo[109956]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:49:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:49:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:49:48 compute-0 sudo[110013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:49:48 compute-0 sudo[110013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:48 compute-0 sudo[110013]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:48 compute-0 sudo[110038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:49:48 compute-0 sudo[110038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
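
The cephadm call above forwards everything after "--" to ceph-volume inside the ceph container: "lvm batch" consumes the pre-built logical volume /dev/ceph_vg0/ceph_lv0 and prepares it as an OSD, with --no-systemd because cephadm manages the unit files itself and --no-auto to disable automatic device grouping. A sketch of the equivalent in-container invocation (illustrative only; cephadm sets up the real execution environment and config):

    # Illustrative in-container equivalent of the forwarded ceph-volume call,
    # with the flags exactly as logged above.
    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "--yes", "--no-systemd"],
        check=True)
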
Oct 10 09:49:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:48 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-0 ceph-mon[73551]: pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:49:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:49:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:49 compute-0 sudo[109811]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:49 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.xkdepb(active, since 92s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.323246592 +0000 UTC m=+0.050424508 container create 8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:49:49 compute-0 systemd[1]: Started libpod-conmon-8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d.scope.
Oct 10 09:49:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.303731166 +0000 UTC m=+0.030909112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.417873946 +0000 UTC m=+0.145051882 container init 8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.425839071 +0000 UTC m=+0.153016987 container start 8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_davinci, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.429413466 +0000 UTC m=+0.156591422 container attach 8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:49:49 compute-0 awesome_davinci[110170]: 167 167
Oct 10 09:49:49 compute-0 systemd[1]: libpod-8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d.scope: Deactivated successfully.
Oct 10 09:49:49 compute-0 conmon[110170]: conmon 8077228d3ae30fbca1e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d.scope/container/memory.events
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.433847478 +0000 UTC m=+0.161025394 container died 8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_davinci, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5179f6dbdaf2dac73188c8394ca2141b4d1d96958518fe259ba5c1d1f0138e88-merged.mount: Deactivated successfully.
Oct 10 09:49:49 compute-0 podman[110128]: 2025-10-10 09:49:49.476388572 +0000 UTC m=+0.203566488 container remove 8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:49:49 compute-0 systemd[1]: libpod-conmon-8077228d3ae30fbca1e3acc0285943da0b2ea2939256ae0523b931576d02035d.scope: Deactivated successfully.
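
The podman events above trace one complete lifecycle for the throwaway container awesome_davinci: create, init, start, attach, died, and remove within a fraction of a second, emitting a single line of output ("167 167", the uid and gid conventionally baked into the ceph image). That is the shape of a "podman run --rm" probe; the exact command cephadm ran is not logged, so the sketch below only reproduces the pattern:

    # Reproduces the pattern of the short-lived probe container above; the
    # actual command cephadm ran is an assumption, but "167 167" matches the
    # ceph uid/gid inside the image.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    probe = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(probe.stdout.strip())   # expected: "167 167"
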
Oct 10 09:49:49 compute-0 sudo[110303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwanoxieswpqftdbumxvusmmmweagrnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089789.3453999-765-106693727353459/AnsiballZ_file.py'
Oct 10 09:49:49 compute-0 sudo[110303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:49 compute-0 podman[110277]: 2025-10-10 09:49:49.653347546 +0000 UTC m=+0.053832947 container create 276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:49:49 compute-0 systemd[1]: Started libpod-conmon-276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356.scope.
Oct 10 09:49:49 compute-0 podman[110277]: 2025-10-10 09:49:49.628384736 +0000 UTC m=+0.028870197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4d4dbfda5fc1225a11acbe22ed9d7264ec21bdb70fb84c19f3a762c5ab9985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4d4dbfda5fc1225a11acbe22ed9d7264ec21bdb70fb84c19f3a762c5ab9985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4d4dbfda5fc1225a11acbe22ed9d7264ec21bdb70fb84c19f3a762c5ab9985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4d4dbfda5fc1225a11acbe22ed9d7264ec21bdb70fb84c19f3a762c5ab9985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4d4dbfda5fc1225a11acbe22ed9d7264ec21bdb70fb84c19f3a762c5ab9985/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:49 compute-0 podman[110277]: 2025-10-10 09:49:49.756737571 +0000 UTC m=+0.157222962 container init 276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swanson, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:49:49 compute-0 podman[110277]: 2025-10-10 09:49:49.768189529 +0000 UTC m=+0.168674890 container start 276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swanson, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:49:49 compute-0 podman[110277]: 2025-10-10 09:49:49.771708782 +0000 UTC m=+0.172194173 container attach 276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:49:49 compute-0 python3.9[110306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:49 compute-0 sudo[110303]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-0 ceph-mon[73551]: mgrmap e33: compute-0.xkdepb(active, since 92s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:49:50 compute-0 pensive_swanson[110310]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:49:50 compute-0 pensive_swanson[110310]: --> All data devices are unavailable
Oct 10 09:49:50 compute-0 systemd[1]: libpod-276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356.scope: Deactivated successfully.
Oct 10 09:49:50 compute-0 podman[110277]: 2025-10-10 09:49:50.170740666 +0000 UTC m=+0.571226047 container died 276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b4d4dbfda5fc1225a11acbe22ed9d7264ec21bdb70fb84c19f3a762c5ab9985-merged.mount: Deactivated successfully.
Oct 10 09:49:50 compute-0 podman[110277]: 2025-10-10 09:49:50.221033659 +0000 UTC m=+0.621519020 container remove 276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swanson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:49:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Oct 10 09:49:50 compute-0 systemd[1]: libpod-conmon-276757ccc28dd9a3d6e7d106b6ba6b4b5754cb0cda903a7132bb098765eaf356.scope: Deactivated successfully.
Oct 10 09:49:50 compute-0 sudo[110038]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:50 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:50 compute-0 sudo[110419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:49:50 compute-0 sudo[110419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:50 compute-0 sudo[110419]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-0 sudo[110469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:49:50 compute-0 sudo[110469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:50 compute-0 sudo[110536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvvbsojccvruweksthsllydblewqewgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089790.1447756-789-215322183561120/AnsiballZ_stat.py'
Oct 10 09:49:50 compute-0 sudo[110536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:50 compute-0 sudo[110537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:49:50 compute-0 sudo[110537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:50 compute-0 sudo[110537]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:50 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e0002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:50 compute-0 python3.9[110545]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:49:50 compute-0 sudo[110536]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.832650431 +0000 UTC m=+0.040551732 container create a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:49:50 compute-0 systemd[1]: Started libpod-conmon-a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd.scope.
Oct 10 09:49:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:49:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:50 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.817092191 +0000 UTC m=+0.024993512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.922300475 +0000 UTC m=+0.130201806 container init a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.93055071 +0000 UTC m=+0.138452021 container start a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hopper, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.934829126 +0000 UTC m=+0.142730427 container attach a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hopper, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:49:50 compute-0 reverent_hopper[110670]: 167 167
Oct 10 09:49:50 compute-0 systemd[1]: libpod-a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd.scope: Deactivated successfully.
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.938309958 +0000 UTC m=+0.146211259 container died a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hopper, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:49:50 compute-0 sudo[110703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfztudcjdbvaalfenztaiwycakikekxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089790.1447756-789-215322183561120/AnsiballZ_file.py'
Oct 10 09:49:50 compute-0 sudo[110703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1424dd737748a6d8f3b19d2c1825a850e5ed239ea2e053ddac48296342a3ddf-merged.mount: Deactivated successfully.
Oct 10 09:49:50 compute-0 podman[110615]: 2025-10-10 09:49:50.990095009 +0000 UTC m=+0.197996330 container remove a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:49:51 compute-0 systemd[1]: libpod-conmon-a78d51c0481be92ba58e3deb61fbb29f9d79987e18bac596bbd48f8c355633dd.scope: Deactivated successfully.
Oct 10 09:49:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 09:49:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 09:49:51 compute-0 ceph-mon[73551]: pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.179593835 +0000 UTC m=+0.052879936 container create 13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:49:51 compute-0 python3.9[110711]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:51 compute-0 sudo[110703]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:51 compute-0 systemd[1]: Started libpod-conmon-13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b.scope.
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.154113198 +0000 UTC m=+0.027399379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:51 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0650d181e53365760dc7bf8c02188b810364678056a25534d699146f9616fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0650d181e53365760dc7bf8c02188b810364678056a25534d699146f9616fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0650d181e53365760dc7bf8c02188b810364678056a25534d699146f9616fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0650d181e53365760dc7bf8c02188b810364678056a25534d699146f9616fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.296963729 +0000 UTC m=+0.170249940 container init 13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.310399989 +0000 UTC m=+0.183686090 container start 13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.313614762 +0000 UTC m=+0.186900893 container attach 13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:49:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:49:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]: {
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:     "0": [
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:         {
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "devices": [
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "/dev/loop3"
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             ],
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "lv_name": "ceph_lv0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "lv_size": "21470642176",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "name": "ceph_lv0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "tags": {
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.cluster_name": "ceph",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.crush_device_class": "",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.encrypted": "0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.osd_id": "0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.type": "block",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.vdo": "0",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:                 "ceph.with_tpm": "0"
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             },
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "type": "block",
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:             "vg_name": "ceph_vg0"
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:         }
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]:     ]
Oct 10 09:49:51 compute-0 sleepy_blackburn[110739]: }
Oct 10 09:49:51 compute-0 systemd[1]: libpod-13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b.scope: Deactivated successfully.
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.638312684 +0000 UTC m=+0.511598775 container died 13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0650d181e53365760dc7bf8c02188b810364678056a25534d699146f9616fe-merged.mount: Deactivated successfully.
Oct 10 09:49:51 compute-0 podman[110723]: 2025-10-10 09:49:51.684965979 +0000 UTC m=+0.558252070 container remove 13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:49:51 compute-0 systemd[1]: libpod-conmon-13558e27218203c4629179f48016fa6b3196f32fbcab0efb6436dc7f2394dc5b.scope: Deactivated successfully.
Oct 10 09:49:51 compute-0 sudo[110469]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:51 compute-0 sudo[110879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:49:51 compute-0 sudo[110879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:51 compute-0 sudo[110879]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:51 compute-0 sudo[110932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppqasdfpsglomrpnufvjecpxwktsrmte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089791.5390306-828-201462944787199/AnsiballZ_stat.py'
Oct 10 09:49:51 compute-0 sudo[110932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:51 compute-0 sudo[110934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:49:51 compute-0 sudo[110934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:52 compute-0 python3.9[110937]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:49:52 compute-0 sudo[110932]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 10 09:49:52 compute-0 sudo[111091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibynkkjswckwuupcgprgbmfimmtwcuer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089791.5390306-828-201462944787199/AnsiballZ_file.py'
Oct 10 09:49:52 compute-0 sudo[111091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.324741504 +0000 UTC m=+0.045904963 container create 7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_montalcini, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:49:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:52 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:52 compute-0 systemd[1]: Started libpod-conmon-7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2.scope.
Oct 10 09:49:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.305662923 +0000 UTC m=+0.026826382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.405515654 +0000 UTC m=+0.126679133 container init 7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.414499582 +0000 UTC m=+0.135663041 container start 7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 09:49:52 compute-0 serene_montalcini[111098]: 167 167
Oct 10 09:49:52 compute-0 systemd[1]: libpod-7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2.scope: Deactivated successfully.
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.421686782 +0000 UTC m=+0.142850271 container attach 7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.42254445 +0000 UTC m=+0.143707919 container died 7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee144695f1c2e48c084ad1148f8e46ee2cd97b092c5656b4babf1a83becb2c0e-merged.mount: Deactivated successfully.
Oct 10 09:49:52 compute-0 podman[111063]: 2025-10-10 09:49:52.459474805 +0000 UTC m=+0.180638284 container remove 7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_montalcini, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:49:52 compute-0 systemd[1]: libpod-conmon-7f2a3232094a328938b474ea2f9505658b001637ba74162acd715e760ccca3e2.scope: Deactivated successfully.
Oct 10 09:49:52 compute-0 python3.9[111095]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:52 compute-0 sudo[111091]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:52 compute-0 podman[111122]: 2025-10-10 09:49:52.612137619 +0000 UTC m=+0.041108899 container create 776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 09:49:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:52 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:52 compute-0 systemd[1]: Started libpod-conmon-776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37.scope.
Oct 10 09:49:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c26b2dbe1cdcff5298a98d527f77c6f8059622ffcaadd7a9a70c4b71e63735/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c26b2dbe1cdcff5298a98d527f77c6f8059622ffcaadd7a9a70c4b71e63735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:52 compute-0 podman[111122]: 2025-10-10 09:49:52.5968605 +0000 UTC m=+0.025831440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c26b2dbe1cdcff5298a98d527f77c6f8059622ffcaadd7a9a70c4b71e63735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c26b2dbe1cdcff5298a98d527f77c6f8059622ffcaadd7a9a70c4b71e63735/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:49:52 compute-0 podman[111122]: 2025-10-10 09:49:52.708225461 +0000 UTC m=+0.137196451 container init 776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 09:49:52 compute-0 podman[111122]: 2025-10-10 09:49:52.729772721 +0000 UTC m=+0.158743671 container start 776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:49:52 compute-0 podman[111122]: 2025-10-10 09:49:52.733981367 +0000 UTC m=+0.162952317 container attach 776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:49:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:52 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:53.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:53 compute-0 ceph-mon[73551]: pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 10 09:49:53 compute-0 lvm[111336]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:49:53 compute-0 lvm[111336]: VG ceph_vg0 finished
Oct 10 09:49:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:53.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:53 compute-0 sudo[111364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-annatqxqhdzrjcmiydelsvmujxetphnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089793.1338942-873-114454888149085/AnsiballZ_dnf.py'
Oct 10 09:49:53 compute-0 sudo[111364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:53 compute-0 pensive_liskov[111162]: {}
Oct 10 09:49:53 compute-0 systemd[1]: libpod-776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37.scope: Deactivated successfully.
Oct 10 09:49:53 compute-0 podman[111122]: 2025-10-10 09:49:53.450778391 +0000 UTC m=+0.879749321 container died 776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_liskov, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:49:53 compute-0 systemd[1]: libpod-776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37.scope: Consumed 1.172s CPU time.
Oct 10 09:49:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-48c26b2dbe1cdcff5298a98d527f77c6f8059622ffcaadd7a9a70c4b71e63735-merged.mount: Deactivated successfully.
Oct 10 09:49:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:53 compute-0 podman[111122]: 2025-10-10 09:49:53.49722281 +0000 UTC m=+0.926193730 container remove 776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_liskov, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:49:53 compute-0 systemd[1]: libpod-conmon-776e7e1d8fe77ddcd86652b9c81a53576428bd1b1c4089d91c18d51105ce1e37.scope: Deactivated successfully.
Oct 10 09:49:53 compute-0 sudo[110934]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:49:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:49:53 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:53 compute-0 python3.9[111368]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:49:53 compute-0 sudo[111380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:49:53 compute-0 sudo[111380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:53 compute-0 sudo[111380]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:54 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:54 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:54 compute-0 sudo[111364]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:55.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:55 compute-0 ceph-mon[73551]: pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:55 compute-0 python3.9[111557]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:49:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:56 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:56 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:56 compute-0 python3.9[111710]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 09:49:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:56 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:49:56.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:49:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:49:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:57.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:49:57 compute-0 python3.9[111861]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:49:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:57] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:49:57] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 10 09:49:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:57 compute-0 ceph-mon[73551]: pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:58 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:58 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:58 compute-0 sudo[112013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lssmzmclpuynurzuzbgyprgfwwetddcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089798.1715007-996-194301887848743/AnsiballZ_systemd.py'
Oct 10 09:49:58 compute-0 sudo[112013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:49:58 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:59 compute-0 python3.9[112015]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:49:59 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 09:49:59 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 09:49:59 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 09:49:59 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 09:49:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:49:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:49:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:59.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:49:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 09:49:59 compute-0 sudo[112013]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:59 compute-0 ceph-mon[73551]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:50:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 10 09:50:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:00 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:00 compute-0 python3.9[112177]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 09:50:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:00 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:00 compute-0 ceph-mon[73551]: overall HEALTH_OK
Oct 10 09:50:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:00 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:50:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:01.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:01 compute-0 ceph-mon[73551]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:02 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:02 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:02 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:03.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:03 compute-0 sudo[112330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwlikdblhzberxhyruodeobiweclysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089803.1920326-1167-269307001038136/AnsiballZ_systemd.py'
Oct 10 09:50:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:03 compute-0 sudo[112330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:03 compute-0 ceph-mon[73551]: pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:03 compute-0 python3.9[112332]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:50:03 compute-0 sudo[112330]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:04 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:04 compute-0 sudo[112485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gieicotkkwujqjkipkcwchcnevxeextn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089804.1174788-1167-154376377427944/AnsiballZ_systemd.py'
Oct 10 09:50:04 compute-0 sudo[112485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:04 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:04 compute-0 python3.9[112487]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:50:04 compute-0 sudo[112485]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:04 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64dc002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:05 compute-0 sshd-session[103483]: Connection closed by 192.168.122.30 port 45494
Oct 10 09:50:05 compute-0 sshd-session[103454]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:50:05 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 10 09:50:05 compute-0 systemd[1]: session-40.scope: Consumed 1min 5.209s CPU time.
Oct 10 09:50:05 compute-0 systemd-logind[806]: Session 40 logged out. Waiting for processes to exit.
Oct 10 09:50:05 compute-0 systemd-logind[806]: Removed session 40.
Oct 10 09:50:05 compute-0 ceph-mon[73551]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:06 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:06 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:06 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:06.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:50:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:07] "GET /metrics HTTP/1.1" 200 48246 "" "Prometheus/2.51.0"
Oct 10 09:50:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:07] "GET /metrics HTTP/1.1" 200 48246 "" "Prometheus/2.51.0"
Oct 10 09:50:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:07.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:07 compute-0 ceph-mon[73551]: pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:08 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:08 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:08 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:09.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:09 compute-0 ceph-mon[73551]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:10 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:10 compute-0 sudo[112523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:50:10 compute-0 sudo[112523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:10 compute-0 sudo[112523]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:10 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:10 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:11.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:11 compute-0 sshd-session[112549]: Accepted publickey for zuul from 192.168.122.30 port 41556 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:50:11 compute-0 systemd-logind[806]: New session 41 of user zuul.
Oct 10 09:50:11 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 10 09:50:11 compute-0 sshd-session[112549]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:50:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:11.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=infra.usagestats t=2025-10-10T09:50:11.752486059Z level=info msg="Usage stats are ready to report"
Oct 10 09:50:11 compute-0 ceph-mon[73551]: pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:12 compute-0 python3.9[112702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:12 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:12 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:12 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:13 compute-0 sudo[112858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tffckkuhggfykqcehqmihrkednckeoue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089812.8887854-68-6615858136768/AnsiballZ_getent.py'
Oct 10 09:50:13 compute-0 sudo[112858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:13.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:13 compute-0 python3.9[112860]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 09:50:13 compute-0 sudo[112858]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:13 compute-0 ceph-mon[73551]: pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:14 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:14 compute-0 sudo[113012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feehfddedddnmsxhgnzfwpxojtjdlvvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089814.1962445-104-128986491461041/AnsiballZ_setup.py'
Oct 10 09:50:14 compute-0 sudo[113012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:14 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:14 compute-0 python3.9[113014]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:50:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:14 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:15 compute-0 sudo[113012]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:15.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:15.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:15 compute-0 sudo[113097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bydzaddgxmjuarkxndrsmeucywkvxwtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089814.1962445-104-128986491461041/AnsiballZ_dnf.py'
Oct 10 09:50:15 compute-0 sudo[113097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:15 compute-0 python3.9[113099]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:50:15 compute-0 ceph-mon[73551]: pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:50:16
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.data', '.nfs', '.mgr']
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:50:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:50:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:16 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:50:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:50:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:16 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:16 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:16.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:50:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:17 compute-0 sudo[113097]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:17] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:50:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:17] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:50:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:17.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:17 compute-0 sudo[113252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxjiplowufsyecqkidqilpazfbonwcib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089817.455386-146-266138297788672/AnsiballZ_dnf.py'
Oct 10 09:50:17 compute-0 sudo[113252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:17 compute-0 ceph-mon[73551]: pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:17 compute-0 python3.9[113254]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:18 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:18 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:18 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:19.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:19 compute-0 sudo[113252]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:19.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:19 compute-0 ceph-mon[73551]: pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:20 compute-0 sudo[113408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvbdlsoszricpxemjkkgqmdvbmnlrvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089819.461471-170-50117061615038/AnsiballZ_systemd.py'
Oct 10 09:50:20 compute-0 sudo[113408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:20 compute-0 python3.9[113410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:50:20 compute-0 sudo[113408]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:21.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:21 compute-0 python3.9[113564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:21 compute-0 ceph-mon[73551]: pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:22 compute-0 sudo[113715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fygxuhdpgslrpdsniuiunahwjvaxdhcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089821.740568-224-257217448776667/AnsiballZ_sefcontext.py'
Oct 10 09:50:22 compute-0 sudo[113715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:22 compute-0 python3.9[113717]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 09:50:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:22 compute-0 ceph-mon[73551]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:22 compute-0 sudo[113715]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:23.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:23 compute-0 python3.9[113868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:24 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:24 compute-0 sudo[114025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwumivysuidzktnihtyzocpidxvyagmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089824.3951952-278-109440351158869/AnsiballZ_dnf.py'
Oct 10 09:50:24 compute-0 sudo[114025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:24 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:24 compute-0 python3.9[114027]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:24 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:25 compute-0 ceph-mon[73551]: pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:25.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:26 compute-0 sudo[114025]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:26 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:26 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:26 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:26.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:50:27 compute-0 sudo[114181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowlhehonmguthhpjvvevzsfrpwrihci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089826.4757643-302-230678757717347/AnsiballZ_command.py'
Oct 10 09:50:27 compute-0 sudo[114181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:27 compute-0 python3.9[114183]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:50:27 compute-0 ceph-mon[73551]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:27] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:50:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:27] "GET /metrics HTTP/1.1" 200 48245 "" "Prometheus/2.51.0"
Oct 10 09:50:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:27.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:28 compute-0 sudo[114181]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:28 compute-0 sudo[114470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhovepjlbqwtfloceepxsjntyqjllzdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089828.4701135-326-264730536758015/AnsiballZ_file.py'
Oct 10 09:50:28 compute-0 sudo[114470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:29 compute-0 python3.9[114472]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 09:50:29 compute-0 sudo[114470]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:29 compute-0 ceph-mon[73551]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:29.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:30 compute-0 python3.9[114622]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:50:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:30 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:30 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:30 compute-0 sudo[114749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:50:30 compute-0 sudo[114749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:30 compute-0 sudo[114749]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:30 compute-0 sudo[114798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obmizkfemubajvzqnjvoliswdcdyugwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089830.394477-374-276641132796953/AnsiballZ_dnf.py'
Oct 10 09:50:30 compute-0 sudo[114798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:30 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:30 compute-0 python3.9[114802]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
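
ansible.legacy.dnf with state=present is idempotent: nothing changes when NetworkManager-ovs is already installed. The module drives the dnf Python API internally; a rough, hedged shell-level equivalent for a single package:

    import subprocess

    # Rough equivalent of state=present for one package; the real module
    # uses the dnf Python API, not the CLI. Needs root to install.
    pkg = "NetworkManager-ovs"
    installed = subprocess.run(
        ["rpm", "-q", pkg],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0
    if not installed:
        subprocess.run(["dnf", "-y", "install", pkg], check=True)
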
Oct 10 09:50:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:50:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:31 compute-0 ceph-mon[73551]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
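
About every 15 s (here 09:50:31, again at 09:50:46) the mgr dispatches "osd blocklist ls" to the mon, which records it twice: once on the audit channel and once as a plain dispatch line. The same query issued by hand, assuming the ceph CLI, a reachable mon, and a usable keyring:

    import json
    import subprocess

    # Same command the mgr dispatches above, run via the ceph CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) if out.strip() else [])
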
Oct 10 09:50:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:31.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:32 compute-0 sudo[114798]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:32 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:32 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:32 compute-0 sudo[114956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyphlcjrtwtkjnwkoswdorirfptdtxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089832.5846477-401-222798492586608/AnsiballZ_dnf.py'
Oct 10 09:50:32 compute-0 sudo[114956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:32 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:33.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:33 compute-0 python3.9[114958]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:33 compute-0 ceph-mon[73551]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:33.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:34 compute-0 sudo[114956]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:35 compute-0 sudo[115111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxythrsqlilpcxjfuqainphjequntkpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089834.8223937-437-134954585010799/AnsiballZ_stat.py'
Oct 10 09:50:35 compute-0 sudo[115111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:35.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:35 compute-0 python3.9[115113]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:50:35 compute-0 sudo[115111]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:35 compute-0 ceph-mon[73551]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:35.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:36 compute-0 sudo[115266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glbagrcvgierarjjkuvrbgcqsruvndky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089835.7419567-461-118459075051074/AnsiballZ_slurp.py'
Oct 10 09:50:36 compute-0 sudo[115266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:36 compute-0 python3.9[115268]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 10 09:50:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:36 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:36 compute-0 sudo[115266]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:36 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:36 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:36.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:50:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:36.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:50:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:36.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
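
Alertmanager's ceph-dashboard receiver fans out to webhook URLs on the other nodes; compute-1 and compute-2 are not answering on 8443, so both integrations time out and the dispatch is dropped after retries. A minimal stand-in for the missing endpoint; path and port are from the log, handler behavior is assumed:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Stand-in for the unreachable dashboard receiver in the log above.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                self.rfile.read(int(self.headers.get("Content-Length", 0)))
                self.send_response(200)
                self.end_headers()
            else:
                self.send_error(404)

    HTTPServer(("", 8443), Receiver).serve_forever()
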
Oct 10 09:50:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:37 compute-0 sshd-session[112552]: Connection closed by 192.168.122.30 port 41556
Oct 10 09:50:37 compute-0 sshd-session[112549]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:50:37 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 10 09:50:37 compute-0 systemd[1]: session-41.scope: Consumed 18.967s CPU time.
Oct 10 09:50:37 compute-0 systemd-logind[806]: Session 41 logged out. Waiting for processes to exit.
Oct 10 09:50:37 compute-0 systemd-logind[806]: Removed session 41.
Oct 10 09:50:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:50:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:50:37 compute-0 ceph-mon[73551]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:37.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:38 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:38 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:38 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:39 compute-0 ceph-mon[73551]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:39.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:40 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:50:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:41.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 09:50:41 compute-0 ceph-mon[73551]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095042 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
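
haproxy fronts the ganesha NFS backends and has just failed nfs.cephfs.1 at layer 4 (connection refused). Its check only tests connect(); plausibly the recurring ganesha "svc_vc_recv ... proxy header" EVENT lines throughout this section are those same checkers connecting and then failing ganesha's PROXY-protocol header read, though the log does not prove that. A layer-4 check equivalent; host and the NFS port 2049 are placeholders, not read from the log:

    import socket

    # haproxy's layer-4 check reduces to "did connect() succeed".
    def l4_check(host, port, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(l4_check("192.168.122.100", 2049))
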
Oct 10 09:50:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:50:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:42 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:42 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:42 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:43 compute-0 sshd-session[115303]: Accepted publickey for zuul from 192.168.122.30 port 48702 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:50:43 compute-0 systemd-logind[806]: New session 42 of user zuul.
Oct 10 09:50:43 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 10 09:50:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:43 compute-0 sshd-session[115303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:50:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:43.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:43 compute-0 ceph-mon[73551]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:50:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:44 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:44 compute-0 python3.9[115457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:44 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:44 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:45.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:45 compute-0 ceph-mon[73551]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:45 compute-0 python3.9[115612]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:50:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:50:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:50:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:46 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:46 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:46 compute-0 python3.9[115806]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:50:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:46.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:50:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:46.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:50:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:46 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:47 compute-0 sshd-session[115306]: Connection closed by 192.168.122.30 port 48702
Oct 10 09:50:47 compute-0 sshd-session[115303]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:50:47 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 10 09:50:47 compute-0 systemd[1]: session-42.scope: Consumed 2.508s CPU time.
Oct 10 09:50:47 compute-0 systemd-logind[806]: Session 42 logged out. Waiting for processes to exit.
Oct 10 09:50:47 compute-0 systemd-logind[806]: Removed session 42.
Oct 10 09:50:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:47] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:50:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:47] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:50:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:47.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:47 compute-0 ceph-mon[73551]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:48 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:48 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:48 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:49.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:49 compute-0 ceph-mon[73551]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:50 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:50 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:50 compute-0 sudo[115837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:50:50 compute-0 sudo[115837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:50 compute-0 sudo[115837]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:50 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:51.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:51 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:50:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:51.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:51 compute-0 ceph-mon[73551]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:52 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:52 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:52 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:53.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:53 compute-0 sshd-session[115864]: Accepted publickey for zuul from 192.168.122.30 port 34092 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:50:53 compute-0 systemd-logind[806]: New session 43 of user zuul.
Oct 10 09:50:53 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 10 09:50:53 compute-0 sshd-session[115864]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:50:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:53.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:53 compute-0 ceph-mon[73551]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:53 compute-0 sudo[115944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:50:53 compute-0 sudo[115944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:53 compute-0 sudo[115944]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:54 compute-0 sudo[115992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:50:54 compute-0 sudo[115992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:50:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
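
At 09:50:51 ganesha entered a 90-second grace window during which NFS clients may reclaim state; after reloading client reclaim info from the backend, the reaper checks whether grace can be lifted early. A sketch that mirrors only the logged condition "reclaim complete(0) clid count(0)"; the real decision is made by ganesha's reaper thread in C:

    # Mirrors the logged lift test; with no clients there is nothing to
    # reclaim, so grace can end before the 90 s duration expires.
    def can_lift_grace(reclaim_complete, clid_count):
        return clid_count == 0 or reclaim_complete >= clid_count

    print(can_lift_grace(0, 0))  # True
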
Oct 10 09:50:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:54 compute-0 python3.9[116068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:54 compute-0 sudo[115992]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:50:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:50:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:50:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:50:54 compute-0 sudo[116125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:50:54 compute-0 sudo[116125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:54 compute-0 sudo[116125]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:54 compute-0 sudo[116154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:50:54 compute-0 sudo[116154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
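
cephadm copies itself to /var/lib/ceph/<fsid>/cephadm.<digest> and, as the sudo COMMAND shows, runs ceph-volume inside the pinned ceph container image, with CEPH_VOLUME_OSDSPEC_AFFINITY tying the run to the default_drive_group spec. A hand-driven equivalent that lists LVM-backed OSDs instead of creating them; assumes root, podman, and a cephadm binary on PATH:

    import subprocess

    # Read-only counterpart of the wrapped batch call above.
    subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "21f084a3-af34-5230-afe4-ea5cd24a55f4",
         "--", "lvm", "list"],
        check=True)
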
Oct 10 09:50:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:54 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:55.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.19256707 +0000 UTC m=+0.045299246 container create 26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:50:55 compute-0 systemd[1]: Started libpod-conmon-26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c.scope.
Oct 10 09:50:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.174093364 +0000 UTC m=+0.026825450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.28098483 +0000 UTC m=+0.133716916 container init 26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.289833471 +0000 UTC m=+0.142565527 container start 26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.292683791 +0000 UTC m=+0.145415867 container attach 26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_pike, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:50:55 compute-0 unruffled_pike[116359]: 167 167
Oct 10 09:50:55 compute-0 systemd[1]: libpod-26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c.scope: Deactivated successfully.
Oct 10 09:50:55 compute-0 conmon[116359]: conmon 26398b3491a2e82de8b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c.scope/container/memory.events
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.300309883 +0000 UTC m=+0.153041959 container died 26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd800b6369cd25abd50df4f615d1f2f37c71af7be4d6dc84df79923019285e7-merged.mount: Deactivated successfully.
Oct 10 09:50:55 compute-0 podman[116316]: 2025-10-10 09:50:55.345421192 +0000 UTC m=+0.198153248 container remove 26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:50:55 compute-0 systemd[1]: libpod-conmon-26398b3491a2e82de8b87be2a27f3c626ede828bc2589862058e1acdbd112b7c.scope: Deactivated successfully.
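
The podman sequence above (create, init, start, attach, died, remove, with a conmon cgroup warning that is common for containers exiting this quickly) is the footprint of a one-shot run with automatic cleanup. Reproducing the pattern with a trivial payload; the image is taken from the log's FROM_IMAGE label:

    import subprocess

    # One-shot container: --rm removes it as soon as it exits, producing
    # the create/start/attach/died/remove sequence seen in the journal.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/centos/centos:stream9",
         "/bin/true"],
        check=True)
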
Oct 10 09:50:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:55.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:55 compute-0 python3.9[116354]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.494762883 +0000 UTC m=+0.036633002 container create 0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_khayyam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:50:55 compute-0 systemd[1]: Started libpod-conmon-0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8.scope.
Oct 10 09:50:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422184e20d6f06880692692093b030bfb4913a4adf057544bd5f9d072132468d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422184e20d6f06880692692093b030bfb4913a4adf057544bd5f9d072132468d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422184e20d6f06880692692093b030bfb4913a4adf057544bd5f9d072132468d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422184e20d6f06880692692093b030bfb4913a4adf057544bd5f9d072132468d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422184e20d6f06880692692093b030bfb4913a4adf057544bd5f9d072132468d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
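
These kernel notices mean the bind-mounted paths sit on an xfs filesystem without the bigtime feature, so inode timestamps are signed 32-bit seconds; the 0x7fffffff in each message is the 2038 cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff from the kernel message is the largest signed 32-bit
    # timestamp, i.e. the classic year-2038 limit.
    limit = 2**31 - 1
    print(hex(limit), datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff 2038-01-19 03:14:07+00:00
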
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.571690429 +0000 UTC m=+0.113560548 container init 0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.479287742 +0000 UTC m=+0.021157861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.584918589 +0000 UTC m=+0.126788728 container start 0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.590657011 +0000 UTC m=+0.132527120 container attach 0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_khayyam, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:50:55 compute-0 ceph-mon[73551]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:50:55 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:50:55 compute-0 romantic_khayyam[116403]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:50:55 compute-0 romantic_khayyam[116403]: --> All data devices are unavailable
Oct 10 09:50:55 compute-0 systemd[1]: libpod-0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8.scope: Deactivated successfully.
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.950569711 +0000 UTC m=+0.492439840 container died 0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_khayyam, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-422184e20d6f06880692692093b030bfb4913a4adf057544bd5f9d072132468d-merged.mount: Deactivated successfully.
Oct 10 09:50:55 compute-0 podman[116383]: 2025-10-10 09:50:55.991951992 +0000 UTC m=+0.533822111 container remove 0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:50:56 compute-0 systemd[1]: libpod-conmon-0dd8b82e072256038b23b67f9b380ebc332afc2fa6a3f5f4247fb7df23f0f3f8.scope: Deactivated successfully.
Oct 10 09:50:56 compute-0 sudo[116154]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:56 compute-0 sudo[116532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:50:56 compute-0 sudo[116532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:56 compute-0 sudo[116532]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:56 compute-0 sudo[116580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:50:56 compute-0 sudo[116580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:56 compute-0 sudo[116632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwsijiltodzvfqxztdpbkcionuprlugy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089855.9292648-80-218082438226185/AnsiballZ_setup.py'
Oct 10 09:50:56 compute-0 sudo[116632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:56 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:56 compute-0 python3.9[116634]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.639800674 +0000 UTC m=+0.067048434 container create b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:50:56 compute-0 systemd[1]: Started libpod-conmon-b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f.scope.
Oct 10 09:50:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:56 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.612523321 +0000 UTC m=+0.039771161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:50:56 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.729809446 +0000 UTC m=+0.157057216 container init b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.742056013 +0000 UTC m=+0.169303763 container start b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:50:56 compute-0 relaxed_lumiere[116700]: 167 167
Oct 10 09:50:56 compute-0 systemd[1]: libpod-b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f.scope: Deactivated successfully.
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.753353901 +0000 UTC m=+0.180601661 container attach b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.753795605 +0000 UTC m=+0.181043355 container died b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 09:50:56 compute-0 sudo[116632]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c04b508da9359e202b0672a22cbd395ccde8c7268f8ec71233ef6a9a2631126a-merged.mount: Deactivated successfully.
Oct 10 09:50:56 compute-0 podman[116683]: 2025-10-10 09:50:56.790347693 +0000 UTC m=+0.217595443 container remove b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:50:56 compute-0 systemd[1]: libpod-conmon-b1546380586745d24af213d027e775976478710107161d5b4e56126ce7b46b1f.scope: Deactivated successfully.
Oct 10 09:50:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:50:56.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
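The dispatcher error above means this node's Alertmanager could not deliver the ceph-dashboard webhook to the receivers on compute-1 (context deadline exceeded) and compute-2 (TCP connect i/o timeout). The delivery attempt is easy to reproduce by hand against the same URL; a minimal sketch, assuming the generic Alertmanager webhook payload shape (trimmed to the fields the receiver needs):

    import json
    import urllib.request

    # Same receiver URL the dispatcher timed out against (from the log).
    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"

    # Trimmed Alertmanager-style webhook payload, enough to exercise the
    # endpoint; "DeliveryProbe" is a made-up alert name.
    payload = json.dumps({
        "version": "4",
        "status": "firing",
        "alerts": [{"labels": {"alertname": "DeliveryProbe"},
                    "annotations": {}}],
    }).encode()

    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:  # covers connect errors and socket timeouts
        print("delivery failed, as in the log:", exc)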
Oct 10 09:50:56 compute-0 podman[116724]: 2025-10-10 09:50:56.971219382 +0000 UTC m=+0.042630031 container create b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:50:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:56 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:57 compute-0 systemd[1]: Started libpod-conmon-b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22.scope.
Oct 10 09:50:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:50:57 compute-0 podman[116724]: 2025-10-10 09:50:56.951677363 +0000 UTC m=+0.023088002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c982cb3c6d82e8f1384e06b1c5f6910bbc5e02ccbb240b01e1e0549275ebe96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c982cb3c6d82e8f1384e06b1c5f6910bbc5e02ccbb240b01e1e0549275ebe96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c982cb3c6d82e8f1384e06b1c5f6910bbc5e02ccbb240b01e1e0549275ebe96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c982cb3c6d82e8f1384e06b1c5f6910bbc5e02ccbb240b01e1e0549275ebe96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:57 compute-0 podman[116724]: 2025-10-10 09:50:57.068848036 +0000 UTC m=+0.140258835 container init b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_colden, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:50:57 compute-0 podman[116724]: 2025-10-10 09:50:57.075573148 +0000 UTC m=+0.146983767 container start b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_colden, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 09:50:57 compute-0 podman[116724]: 2025-10-10 09:50:57.079297567 +0000 UTC m=+0.150708276 container attach b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 09:50:57 compute-0 sudo[116818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcolvcgiqoqettthznhoiijihgcjwgwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089855.9292648-80-218082438226185/AnsiballZ_dnf.py'
Oct 10 09:50:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
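The anonymous "HEAD / HTTP/1.0" requests that recur roughly every two seconds in this log, alternating between 192.168.122.100 and 192.168.122.102, are load-balancer-style health probes against the RGW beast frontend; they return 200 with a zero-length body and near-zero latency. Replaying one by hand is a one-liner; a minimal sketch, with the host and port as assumptions since the excerpt shows neither:

    import http.client

    # Replay the anonymous health probe seen above. http.client speaks
    # HTTP/1.1 rather than the probes' HTTP/1.0, which does not matter
    # for a bodyless HEAD /. Host and port 8080 are assumptions.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com",
                                      8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # expect: 200 OK, empty body
    conn.close()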
Oct 10 09:50:57 compute-0 sudo[116818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:57 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:50:57 compute-0 competent_colden[116763]: {
Oct 10 09:50:57 compute-0 competent_colden[116763]:     "0": [
Oct 10 09:50:57 compute-0 competent_colden[116763]:         {
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "devices": [
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "/dev/loop3"
Oct 10 09:50:57 compute-0 competent_colden[116763]:             ],
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "lv_name": "ceph_lv0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "lv_size": "21470642176",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "name": "ceph_lv0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "tags": {
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.cluster_name": "ceph",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.crush_device_class": "",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.encrypted": "0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.osd_id": "0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.type": "block",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.vdo": "0",
Oct 10 09:50:57 compute-0 competent_colden[116763]:                 "ceph.with_tpm": "0"
Oct 10 09:50:57 compute-0 competent_colden[116763]:             },
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "type": "block",
Oct 10 09:50:57 compute-0 competent_colden[116763]:             "vg_name": "ceph_vg0"
Oct 10 09:50:57 compute-0 competent_colden[116763]:         }
Oct 10 09:50:57 compute-0 competent_colden[116763]:     ]
Oct 10 09:50:57 compute-0 competent_colden[116763]: }
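The JSON the competent_colden container just printed is the standard `ceph-volume lvm list --format json` report that the sudo line above requested via the copied cephadm script: one key per OSD id, each holding the logical volumes that back it, with the ceph.* metadata duplicated in lv_tags (flat string) and tags (parsed map). A minimal sketch of consuming it; the direct `cephadm` invocation is an assumption, since the log drives it through `/var/lib/ceph/<fsid>/cephadm.<checksum>`:

    import json
    import subprocess

    # Request the same report cephadm asked for above (fsid from the log).
    raw = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "21f084a3-af34-5230-afe4-ea5cd24a55f4",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Map each OSD id to its backing LV, as in the output logged above.
    report = json.loads(raw)
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"devices={','.join(lv['devices'])})")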
Oct 10 09:50:57 compute-0 systemd[1]: libpod-b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22.scope: Deactivated successfully.
Oct 10 09:50:57 compute-0 podman[116724]: 2025-10-10 09:50:57.383797582 +0000 UTC m=+0.455208241 container died b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:50:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:57] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:50:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:50:57] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c982cb3c6d82e8f1384e06b1c5f6910bbc5e02ccbb240b01e1e0549275ebe96-merged.mount: Deactivated successfully.
Oct 10 09:50:57 compute-0 python3.9[116820]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:57 compute-0 podman[116724]: 2025-10-10 09:50:57.439586389 +0000 UTC m=+0.510997008 container remove b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 09:50:57 compute-0 systemd[1]: libpod-conmon-b9a202098babd1b5258562aa35e8edce751eef28c4420792e039637843338b22.scope: Deactivated successfully.
Oct 10 09:50:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:57.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:57 compute-0 sudo[116580]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:57 compute-0 sudo[116840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:50:57 compute-0 sudo[116840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:57 compute-0 sudo[116840]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:57 compute-0 sudo[116865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:50:57 compute-0 sudo[116865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:57 compute-0 ceph-mon[73551]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.062103039 +0000 UTC m=+0.046479374 container create ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:50:58 compute-0 systemd[1]: Started libpod-conmon-ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3.scope.
Oct 10 09:50:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.039923007 +0000 UTC m=+0.024299382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.144567711 +0000 UTC m=+0.128944076 container init ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.155306221 +0000 UTC m=+0.139682546 container start ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.158475591 +0000 UTC m=+0.142851936 container attach ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:50:58 compute-0 nostalgic_mirzakhani[116947]: 167 167
Oct 10 09:50:58 compute-0 systemd[1]: libpod-ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3.scope: Deactivated successfully.
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.162183699 +0000 UTC m=+0.146560024 container died ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a04c0f2450b62afe1322f28415d70bc45777154adc237271a20488dbc40c75-merged.mount: Deactivated successfully.
Oct 10 09:50:58 compute-0 podman[116930]: 2025-10-10 09:50:58.195484564 +0000 UTC m=+0.179860889 container remove ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:50:58 compute-0 systemd[1]: libpod-conmon-ca3696f47f2e88942f7e5b99ae63f277d0b75f625e84b8eb95295303e1ea33c3.scope: Deactivated successfully.
Oct 10 09:50:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:50:58 compute-0 podman[116971]: 2025-10-10 09:50:58.375353432 +0000 UTC m=+0.057609586 container create 44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tharp, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:50:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:58 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:58 compute-0 systemd[1]: Started libpod-conmon-44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac.scope.
Oct 10 09:50:58 compute-0 podman[116971]: 2025-10-10 09:50:58.352761556 +0000 UTC m=+0.035017780 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:50:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa4dfb0a3bf028d801bc3dd7e9975e55d57bc743bf2c68fb4cb06c359ca1a87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa4dfb0a3bf028d801bc3dd7e9975e55d57bc743bf2c68fb4cb06c359ca1a87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa4dfb0a3bf028d801bc3dd7e9975e55d57bc743bf2c68fb4cb06c359ca1a87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa4dfb0a3bf028d801bc3dd7e9975e55d57bc743bf2c68fb4cb06c359ca1a87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:50:58 compute-0 podman[116971]: 2025-10-10 09:50:58.482229168 +0000 UTC m=+0.164485392 container init 44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tharp, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:50:58 compute-0 podman[116971]: 2025-10-10 09:50:58.489958062 +0000 UTC m=+0.172214206 container start 44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tharp, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:50:58 compute-0 podman[116971]: 2025-10-10 09:50:58.493766733 +0000 UTC m=+0.176022897 container attach 44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:50:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:58 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:58 compute-0 sudo[116818]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:50:58 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:59 compute-0 lvm[117086]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:50:59 compute-0 lvm[117086]: VG ceph_vg0 finished
Oct 10 09:50:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:59 compute-0 pedantic_tharp[116987]: {}
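pedantic_tharp is the `ceph-volume raw list --format json` probe launched by the second sudo line above, and its empty object is consistent with the earlier reports: osd.0 lives on the LV /dev/ceph_vg0/ceph_lv0 ("0 physical, 1 LVM"), so there are no raw-mode OSD devices for this host to report. A hedged sketch of the comparison cephadm is effectively making between the two listings:

    import json

    # The two documents captured above, abbreviated (osd.0 entry trimmed).
    lvm_json_text = '{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0"}]}'
    raw_json_text = "{}"

    lvm_osds = set(json.loads(lvm_json_text))  # {"0"} in this log
    raw_osds = set(json.loads(raw_json_text))  # empty: no raw-mode OSDs
    print("LVM-backed OSDs:", sorted(lvm_osds) or "none")
    print("raw-backed OSDs:", sorted(raw_osds) or "none")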
Oct 10 09:50:59 compute-0 systemd[1]: libpod-44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac.scope: Deactivated successfully.
Oct 10 09:50:59 compute-0 systemd[1]: libpod-44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac.scope: Consumed 1.248s CPU time.
Oct 10 09:50:59 compute-0 podman[116971]: 2025-10-10 09:50:59.265313913 +0000 UTC m=+0.947570057 container died 44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tharp, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:50:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa4dfb0a3bf028d801bc3dd7e9975e55d57bc743bf2c68fb4cb06c359ca1a87-merged.mount: Deactivated successfully.
Oct 10 09:50:59 compute-0 podman[116971]: 2025-10-10 09:50:59.330704495 +0000 UTC m=+1.012960639 container remove 44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tharp, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:50:59 compute-0 systemd[1]: libpod-conmon-44a1b2a706fe096290fe8b7c8220eda4fde36cba12d9a4d524558da0a0adc7ac.scope: Deactivated successfully.
Oct 10 09:50:59 compute-0 sudo[116865]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:50:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:50:59 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:50:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:50:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:59.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:50:59 compute-0 sudo[117101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:50:59 compute-0 sudo[117101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:59 compute-0 sudo[117101]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:59 compute-0 ceph-mon[73551]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:50:59 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:59 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:59 compute-0 sudo[117251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrjzybdkmvqrslxsfsrmqqyqswclvbgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089859.5500293-116-183075493946644/AnsiballZ_setup.py'
Oct 10 09:50:59 compute-0 sudo[117251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:00 compute-0 python3.9[117253]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:51:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:00 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:00 compute-0 sudo[117251]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:00 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:00 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:51:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:01 compute-0 sudo[117448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocpvbgvymuxzepvbsknbnwkmavoqwbqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089860.8881502-149-125129216453490/AnsiballZ_file.py'
Oct 10 09:51:01 compute-0 sudo[117448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:01.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:01 compute-0 python3.9[117450]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:01 compute-0 sudo[117448]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:01 compute-0 ceph-mon[73551]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:02 compute-0 sudo[117601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nziqzcwymcyzjzbfswyuvpngqbrdacma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089861.910543-173-5697816784555/AnsiballZ_command.py'
Oct 10 09:51:02 compute-0 sudo[117601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:02 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:02 compute-0 python3.9[117603]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:51:02 compute-0 sudo[117601]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:02 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:02 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:03.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:03 compute-0 sudo[117767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scmimhaegpucyexqzjqcsadbvgjyfvmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089862.9516904-197-155056632940976/AnsiballZ_stat.py'
Oct 10 09:51:03 compute-0 sudo[117767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:03.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:03 compute-0 python3.9[117769]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:03 compute-0 sudo[117767]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:03 compute-0 ceph-mon[73551]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:03 compute-0 sudo[117845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nybjhcfigiihwwmepiqthaqiovglrmzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089862.9516904-197-155056632940976/AnsiballZ_file.py'
Oct 10 09:51:03 compute-0 sudo[117845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095104 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:51:04 compute-0 python3.9[117847]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:04 compute-0 sudo[117845]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:04 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:04 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:04 compute-0 sudo[117998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdlnfsbaduafbchztwicqwbbvolitvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089864.4112244-233-126308588806628/AnsiballZ_stat.py'
Oct 10 09:51:04 compute-0 sudo[117998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:04 compute-0 python3.9[118000]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:04 compute-0 sudo[117998]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:04 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:05 compute-0 sudo[118077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibesbcgyynyaayynezeuoguepsruqwta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089864.4112244-233-126308588806628/AnsiballZ_file.py'
Oct 10 09:51:05 compute-0 sudo[118077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:05 compute-0 python3.9[118079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:05 compute-0 sudo[118077]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:05.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:05 compute-0 ceph-mon[73551]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:06 compute-0 sudo[118230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcuflhkkdassyovzimegothumiszgrlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089865.6793275-272-157795497042159/AnsiballZ_ini_file.py'
Oct 10 09:51:06 compute-0 sudo[118230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:06 compute-0 python3.9[118232]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:06 compute-0 sudo[118230]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:06 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:06 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:06 compute-0 sudo[118383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dftnxiszsydnixqdtlyzhyzqfqflfdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089866.469602-272-98501559372605/AnsiballZ_ini_file.py'
Oct 10 09:51:06 compute-0 sudo[118383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:06.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:51:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:06.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:51:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:06 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:07 compute-0 python3.9[118385]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:07 compute-0 sudo[118383]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:07] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Oct 10 09:51:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:07] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Oct 10 09:51:07 compute-0 sudo[118535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpqgsbcypttlsypwsnpjaiuvfmdntitk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089867.1921237-272-199942080955796/AnsiballZ_ini_file.py'
Oct 10 09:51:07 compute-0 sudo[118535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:07.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:07 compute-0 python3.9[118537]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:07 compute-0 sudo[118535]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:07 compute-0 ceph-mon[73551]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:08 compute-0 sudo[118688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xasiyqqaoxgtxeehelqdkgouewmdeyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089867.8813407-272-3263585943048/AnsiballZ_ini_file.py'
Oct 10 09:51:08 compute-0 sudo[118688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:08 compute-0 python3.9[118690]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:08 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:08 compute-0 sudo[118688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:08 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64e00046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:08 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:09.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:09 compute-0 sudo[118841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stgscaasdbytqdswsvmsresmnupdteih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089869.1043806-365-59387206084766/AnsiballZ_dnf.py'
Oct 10 09:51:09 compute-0 sudo[118841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:09.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:09 compute-0 python3.9[118843]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:51:09 compute-0 ceph-mon[73551]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:10 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:10 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:10 compute-0 sudo[118849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:51:10 compute-0 sudo[118849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:10 compute-0 sudo[118849]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:10 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:11 compute-0 sudo[118841]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:11.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095111 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:51:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:11.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:11 compute-0 ceph-mon[73551]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:11 compute-0 sudo[119023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phznmnwzhkwrpbslepmgikogtwhuoliz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089871.556846-398-74855031824293/AnsiballZ_setup.py'
Oct 10 09:51:11 compute-0 sudo[119023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:12 compute-0 python3.9[119025]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:51:12 compute-0 sudo[119023]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:12 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:12 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:12 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:13 compute-0 sudo[119179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baabrkmdwdwigopmvyrednwsdlxxeslz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089872.7234378-422-251682515593106/AnsiballZ_stat.py'
Oct 10 09:51:13 compute-0 sudo[119179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:13.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:13 compute-0 python3.9[119181]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:51:13 compute-0 sudo[119179]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:13.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:13 compute-0 ceph-mon[73551]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:13 compute-0 sudo[119331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpatvypeustdkjhavyjaereazzezqoit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089873.558975-449-142736156175999/AnsiballZ_stat.py'
Oct 10 09:51:13 compute-0 sudo[119331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:14 compute-0 python3.9[119333]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:51:14 compute-0 sudo[119331]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:14 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:14 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:14 compute-0 sudo[119485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cosgsgplpcstinblhknlxrqobhlisssx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089874.5141966-479-160091111050028/AnsiballZ_service_facts.py'
Oct 10 09:51:14 compute-0 sudo[119485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:14 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:15 compute-0 python3.9[119487]: ansible-service_facts Invoked
Oct 10 09:51:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:15 compute-0 network[119504]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:51:15 compute-0 network[119505]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:51:15 compute-0 network[119506]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:51:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:15.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:15 compute-0 ceph-mon[73551]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:51:16
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'default.rgw.control', '.rgw.root', '.nfs', 'volumes', 'default.rgw.meta']
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:51:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:51:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:16 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:51:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:51:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:16 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:16.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:51:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:16 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:17.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:17] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:51:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:17] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:51:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:17 compute-0 ceph-mon[73551]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:18 compute-0 sudo[119485]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:18 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:18 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:18 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c40029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:19 compute-0 ceph-mon[73551]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:19.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:51:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:20 compute-0 sudo[119797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqthpenndkenmqnyelzncqzkzgjavpdf ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760089880.0041795-518-207023772372803/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760089880.0041795-518-207023772372803/args'
Oct 10 09:51:20 compute-0 sudo[119797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:20 compute-0 sudo[119797]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:20 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:21 compute-0 sudo[119965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwrsztqstgtutrofpsmwvhrohwqaaaim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089880.8884647-551-109410716785113/AnsiballZ_dnf.py'
Oct 10 09:51:21 compute-0 sudo[119965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:21.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:21 compute-0 ceph-mon[73551]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:21 compute-0 python3.9[119967]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:51:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:21.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:22 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:22 compute-0 sudo[119965]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:23 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:23 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:51:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:23 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:51:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:23 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:51:23 compute-0 ceph-mon[73551]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:23 compute-0 sudo[120120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etmthnhbetytcfgadqvtfujuvwblahnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089883.2521822-590-140981453610482/AnsiballZ_package_facts.py'
Oct 10 09:51:23 compute-0 sudo[120120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:24 compute-0 python3.9[120122]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 09:51:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 09:51:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:24 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:24 compute-0 sudo[120120]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:24 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:25 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:25 compute-0 ceph-mon[73551]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 09:51:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:25 compute-0 sudo[120274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjqojuwlucuktubhouhpfjxvhnjsmme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089885.2207012-620-71829516304729/AnsiballZ_stat.py'
Oct 10 09:51:25 compute-0 sudo[120274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:25 compute-0 python3.9[120276]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:25 compute-0 sudo[120274]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:26 compute-0 sudo[120353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahezhowgaqtgceqoauqdfyezuisbymaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089885.2207012-620-71829516304729/AnsiballZ_file.py'
Oct 10 09:51:26 compute-0 sudo[120353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:26 compute-0 python3.9[120355]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:26 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:51:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:26 compute-0 sudo[120353]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:26 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:26 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:26.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:51:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:27 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:27 compute-0 sudo[120506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqcmpvapqqfgnpirncbctspobmunlgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089886.667945-656-67619261327891/AnsiballZ_stat.py'
Oct 10 09:51:27 compute-0 sudo[120506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:27 compute-0 python3.9[120508]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:27 compute-0 sudo[120506]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:27] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:51:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:27] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:51:27 compute-0 ceph-mon[73551]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:27 compute-0 sudo[120584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdplqzjhiptglydbnnsfmivwuwlatpib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089886.667945-656-67619261327891/AnsiballZ_file.py'
Oct 10 09:51:27 compute-0 sudo[120584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:27 compute-0 python3.9[120586]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:27 compute-0 sudo[120584]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:28 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:29 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:29 compute-0 sudo[120738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsddrwjdwkoxtasbnkamzgblqmlcgrbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089888.7287478-710-31792258934715/AnsiballZ_lineinfile.py'
Oct 10 09:51:29 compute-0 sudo[120738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:29 compute-0 ceph-mon[73551]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:29 compute-0 python3.9[120740]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:29 compute-0 sudo[120738]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 10 09:51:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 10 09:51:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:51:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:30 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:30 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:30 compute-0 sudo[120893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yivduvvjykzobznyfvhmcfiuurcdsvln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089890.4972885-755-67782880723569/AnsiballZ_setup.py'
Oct 10 09:51:30 compute-0 sudo[120893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:30 compute-0 sudo[120896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:51:31 compute-0 sudo[120896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:31 compute-0 sudo[120896]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:31 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:31 compute-0 python3.9[120895]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:51:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:51:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:31 compute-0 ceph-mon[73551]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:51:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:31 compute-0 sudo[120893]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095131 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:51:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:31 compute-0 sudo[121002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovwruacfrspwtzxakclfcmdkrbrvthuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089890.4972885-755-67782880723569/AnsiballZ_systemd.py'
Oct 10 09:51:31 compute-0 sudo[121002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:51:32 compute-0 python3.9[121004]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:51:32 compute-0 sudo[121002]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:32 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:32 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:33 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:33 compute-0 sshd-session[115867]: Connection closed by 192.168.122.30 port 34092
Oct 10 09:51:33 compute-0 sshd-session[115864]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:51:33 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 10 09:51:33 compute-0 systemd[1]: session-43.scope: Consumed 25.580s CPU time.
Oct 10 09:51:33 compute-0 systemd-logind[806]: Session 43 logged out. Waiting for processes to exit.
Oct 10 09:51:33 compute-0 systemd-logind[806]: Removed session 43.
Oct 10 09:51:33 compute-0 ceph-mon[73551]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:51:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:33.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[107036]: 10/10/2025 09:51:34 : epoch 68e8d6a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64bc001090 fd 38 proxy ignored for local
Oct 10 09:51:34 compute-0 kernel: ganesha.nfsd[120843]: segfault at 50 ip 00007f659c87a32e sp 00007f65697f9210 error 4 in libntirpc.so.5.8[7f659c85f000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 10 09:51:34 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 09:51:34 compute-0 systemd[1]: Started Process Core Dump (PID 121034/UID 0).
Oct 10 09:51:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:35 compute-0 ceph-mon[73551]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 10 09:51:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:35.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 10 09:51:36 compute-0 systemd-coredump[121036]: Process 107040 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 65:
                                                    #0  0x00007f659c87a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 09:51:36 compute-0 systemd[1]: systemd-coredump@1-121034-0.service: Deactivated successfully.
Oct 10 09:51:36 compute-0 systemd[1]: systemd-coredump@1-121034-0.service: Consumed 1.370s CPU time.
Oct 10 09:51:36 compute-0 podman[121042]: 2025-10-10 09:51:36.278486065 +0000 UTC m=+0.034777289 container died 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:51:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:36 compute-0 systemd[91328]: Created slice User Background Tasks Slice.
Oct 10 09:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-081d1b79c5b3f6683d6ebd6edb5d462bad6fade4c25324854f35c1efe9aacfc0-merged.mount: Deactivated successfully.
Oct 10 09:51:36 compute-0 systemd[91328]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 09:51:36 compute-0 podman[121042]: 2025-10-10 09:51:36.325825829 +0000 UTC m=+0.082117043 container remove 863ebf4a1f83951f3d4630865d7466615c23d990994cb7df39a3b1f1a38ada00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 09:51:36 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 09:51:36 compute-0 systemd[91328]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 09:51:36 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 09:51:36 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.882s CPU time.
Oct 10 09:51:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:36.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:51:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:37.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:51:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:51:37 compute-0 ceph-mon[73551]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:37.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:51:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:38 compute-0 sshd-session[121089]: Accepted publickey for zuul from 192.168.122.30 port 41816 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:51:38 compute-0 systemd-logind[806]: New session 44 of user zuul.
Oct 10 09:51:38 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 10 09:51:39 compute-0 sshd-session[121089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:51:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:39 compute-0 ceph-mon[73551]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:51:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:39 compute-0 sudo[121242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudymfrqtoayqnnrxgozhoepbrniamjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089899.106476-26-183127104473468/AnsiballZ_file.py'
Oct 10 09:51:39 compute-0 sudo[121242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:39 compute-0 python3.9[121244]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:39 compute-0 sudo[121242]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:40 compute-0 sudo[121395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viixycnebsmceapcxbywblbcuaozyltz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089900.132039-62-228712420674144/AnsiballZ_stat.py'
Oct 10 09:51:40 compute-0 sudo[121395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095140 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:51:40 compute-0 python3.9[121397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:40 compute-0 sudo[121395]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:41 compute-0 sudo[121474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlwjkaocvfpblbdyfogjvbysvpzhdugp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089900.132039-62-228712420674144/AnsiballZ_file.py'
Oct 10 09:51:41 compute-0 sudo[121474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:41 compute-0 python3.9[121476]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:41 compute-0 sudo[121474]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:41 compute-0 ceph-mon[73551]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:41 compute-0 sshd-session[121092]: Connection closed by 192.168.122.30 port 41816
Oct 10 09:51:41 compute-0 sshd-session[121089]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:51:41 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 10 09:51:41 compute-0 systemd[1]: session-44.scope: Consumed 1.851s CPU time.
Oct 10 09:51:41 compute-0 systemd-logind[806]: Session 44 logged out. Waiting for processes to exit.
Oct 10 09:51:41 compute-0 systemd-logind[806]: Removed session 44.
Oct 10 09:51:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:43 compute-0 ceph-mon[73551]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:45 compute-0 ceph-mon[73551]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:51:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:51:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:51:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:46 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 2.
Oct 10 09:51:46 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:51:46 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.882s CPU time.
Oct 10 09:51:46 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:51:46 compute-0 sshd-session[121508]: Accepted publickey for zuul from 192.168.122.30 port 37504 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:51:46 compute-0 systemd-logind[806]: New session 45 of user zuul.
Oct 10 09:51:46 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 10 09:51:46 compute-0 sshd-session[121508]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:51:46 compute-0 podman[121583]: 2025-10-10 09:51:46.927599549 +0000 UTC m=+0.056697537 container create 804db3ea2f85fd0d1d8332f29973f3e619ec60f325456507a3c44f9173cd53e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:51:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2aec44a996ebfac5b1b7cc8275f29564a9fa6dc8f4f3ea4c2429596e75e075/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2aec44a996ebfac5b1b7cc8275f29564a9fa6dc8f4f3ea4c2429596e75e075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2aec44a996ebfac5b1b7cc8275f29564a9fa6dc8f4f3ea4c2429596e75e075/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2aec44a996ebfac5b1b7cc8275f29564a9fa6dc8f4f3ea4c2429596e75e075/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:46 compute-0 podman[121583]: 2025-10-10 09:51:46.899155768 +0000 UTC m=+0.028253776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:51:46 compute-0 podman[121583]: 2025-10-10 09:51:46.993034198 +0000 UTC m=+0.122132226 container init 804db3ea2f85fd0d1d8332f29973f3e619ec60f325456507a3c44f9173cd53e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:51:46 compute-0 podman[121583]: 2025-10-10 09:51:46.998670565 +0000 UTC m=+0.127768563 container start 804db3ea2f85fd0d1d8332f29973f3e619ec60f325456507a3c44f9173cd53e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:51:47 compute-0 bash[121583]: 804db3ea2f85fd0d1d8332f29973f3e619ec60f325456507a3c44f9173cd53e2
Oct 10 09:51:47 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:51:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:47] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:51:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:47] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:51:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:47.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:47 compute-0 ceph-mon[73551]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:47 compute-0 python3.9[121765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:51:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:48 compute-0 sudo[121921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krnyjtmtiiyahzoecoerrfgfqemhgdfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089908.3942378-59-217259280294275/AnsiballZ_file.py'
Oct 10 09:51:48 compute-0 sudo[121921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:49 compute-0 python3.9[121923]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:49 compute-0 sudo[121921]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:49.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:49.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:49 compute-0 ceph-mon[73551]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:49 compute-0 sudo[122096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yniiatlubxavxtetbvpgnsienyjvtyaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089909.3539782-83-158456493172391/AnsiballZ_stat.py'
Oct 10 09:51:49 compute-0 sudo[122096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:50 compute-0 python3.9[122098]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:50 compute-0 sudo[122096]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:50 compute-0 sudo[122175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdweoznerxqwwgrrwgldrrepbfukmtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089909.3539782-83-158456493172391/AnsiballZ_file.py'
Oct 10 09:51:50 compute-0 sudo[122175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:50 compute-0 python3.9[122177]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.4ofi9ml7 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:50 compute-0 sudo[122175]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:51 compute-0 sudo[122203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:51:51 compute-0 sudo[122203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:51 compute-0 sudo[122203]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:51.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:51 compute-0 sudo[122353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdyyqqacemciqwmlmfvdumqvqiullii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089911.2036994-143-97960564594179/AnsiballZ_stat.py'
Oct 10 09:51:51 compute-0 sudo[122353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:51 compute-0 ceph-mon[73551]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:51 compute-0 python3.9[122355]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:51 compute-0 sudo[122353]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:51 compute-0 sudo[122431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbfirrkegqzwpoyilmtgidjyqooejedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089911.2036994-143-97960564594179/AnsiballZ_file.py'
Oct 10 09:51:51 compute-0 sudo[122431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:52 compute-0 python3.9[122433]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.8jz3ufcs recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:52 compute-0 sudo[122431]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:52 compute-0 sudo[122585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcikxviwabcfuihzazdipboxsvzuwbaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089912.6301496-182-205329463882966/AnsiballZ_file.py'
Oct 10 09:51:52 compute-0 sudo[122585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:53 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:51:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:53 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:51:53 compute-0 python3.9[122587]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:53 compute-0 sudo[122585]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:53.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:53 compute-0 ceph-mon[73551]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:53 compute-0 sudo[122737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewhbdajouxnjoyqxnivmhkybthbdmwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089913.406115-206-268942870558171/AnsiballZ_stat.py'
Oct 10 09:51:53 compute-0 sudo[122737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:53 compute-0 python3.9[122739]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:53 compute-0 sudo[122737]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:54 compute-0 sudo[122816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzmrpatyyviquktarnjfeizyvgiwkxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089913.406115-206-268942870558171/AnsiballZ_file.py'
Oct 10 09:51:54 compute-0 sudo[122816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:54 compute-0 python3.9[122818]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:54 compute-0 sudo[122816]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:54 compute-0 sudo[122969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szvkrmieldntdxihyrfnmmyquthkswut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089914.6258922-206-150838214547524/AnsiballZ_stat.py'
Oct 10 09:51:54 compute-0 sudo[122969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:55 compute-0 python3.9[122971]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:55 compute-0 sudo[122969]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:55 compute-0 sudo[123047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuwkriclxludrkdnzpvyvvmquczyswor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089914.6258922-206-150838214547524/AnsiballZ_file.py'
Oct 10 09:51:55 compute-0 sudo[123047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:55 compute-0 python3.9[123049]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:55 compute-0 sudo[123047]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:55 compute-0 ceph-mon[73551]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:56 compute-0 sudo[123200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-necmsqsvjhrpoisykqwtuagrbnzuodle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089915.993147-275-196144085980624/AnsiballZ_file.py'
Oct 10 09:51:56 compute-0 sudo[123200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:56 compute-0 python3.9[123202]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:56 compute-0 sudo[123200]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:51:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:51:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:51:57 compute-0 sudo[123353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctulqfbwaciopezwcekdguxihkhcatnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089916.7976172-299-64417005028014/AnsiballZ_stat.py'
Oct 10 09:51:57 compute-0 sudo[123353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:51:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:51:57 compute-0 python3.9[123355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:57 compute-0 sudo[123353]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:57] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:51:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:51:57] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:51:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:51:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:57.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 09:51:57 compute-0 sudo[123431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfissvofsjczxzfppmfqavyulfdznwbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089916.7976172-299-64417005028014/AnsiballZ_file.py'
Oct 10 09:51:57 compute-0 sudo[123431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:57 compute-0 ceph-mon[73551]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:57 compute-0 python3.9[123433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:57 compute-0 sudo[123431]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:58 compute-0 sudo[123584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dutkdvkcokbcaigogosjdbaryaqucgdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089918.1271846-335-201440844976091/AnsiballZ_stat.py'
Oct 10 09:51:58 compute-0 sudo[123584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:58 compute-0 python3.9[123586]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:58 compute-0 sudo[123584]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:58 compute-0 sudo[123663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzmnrclzfgzvwpoqjgskbwhjigggckkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089918.1271846-335-201440844976091/AnsiballZ_file.py'
Oct 10 09:51:58 compute-0 sudo[123663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
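Note: the DBUS CRIT messages above follow from running ganesha.nfsd in a container with no system bus: dbus_bus_get() cannot reach /run/dbus/system_bus_socket, so every subsequent dbus_connection_register_object_path call is made against a null connection. A quick check, in Python, for whether that socket is even present in the daemon's mount namespace (socket path taken from the log):

    import os
    import socket

    SOCK = "/run/dbus/system_bus_socket"

    if not os.path.exists(SOCK):
        print(f"{SOCK} missing - ganesha's DBus interface cannot start")
    else:
        # The socket exists; attempt the same connect libdbus would do.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(SOCK)
            print("system bus reachable")
        finally:
            s.close()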
Oct 10 09:51:59 compute-0 python3.9[123665]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
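Note: the Kerberos warnings above come from the NFSv4 callback channel: the log's own "does not specify default realm" message points at the likely cause, and with no usable nfs/ entry in /etc/krb5.keytab the machine-credential refresh fails with the libkrb5 error code shown. A hedged way to inspect the keytab from the host, wrapped in Python (klist -k is the standard MIT Kerberos keytab listing):

    import subprocess

    # An empty keytab, or one without an nfs/ principal, explains the
    # "no usable keytab entry" CRIT above.
    result = subprocess.run(
        ["klist", "-k", "/etc/krb5.keytab"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)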
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:51:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:51:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:51:59 compute-0 sudo[123663]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:51:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:51:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:59.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
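Note: the anonymous "HEAD / HTTP/1.0" requests arriving every couple of seconds from 192.168.122.100/.102 have the shape of load-balancer health probes rather than client traffic. A minimal reproduction of such a probe; the port is an assumption, since the beast frontend's listen port does not appear in these lines:

    import http.client

    # Issue the same anonymous HEAD / seen in the beast access log.
    # Port 8080 is a placeholder for the actual rgw frontend port.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # 200 in the log above
    conn.close()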
Oct 10 09:51:59 compute-0 ceph-mon[73551]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:59 compute-0 sudo[123753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:51:59 compute-0 sudo[123753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:59 compute-0 sudo[123753]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:59 compute-0 sudo[123779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:51:59 compute-0 sudo[123779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:00 compute-0 sudo[123877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfaxeybjdlqsjchdylnwzsqemapelnvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089919.4510357-371-7143848324862/AnsiballZ_systemd.py'
Oct 10 09:52:00 compute-0 sudo[123877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:52:00 compute-0 python3.9[123879]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:52:00 compute-0 systemd[1]: Reloading.
Oct 10 09:52:00 compute-0 sudo[123779]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:00 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:00 compute-0 systemd-rc-local-generator[123942]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:52:00 compute-0 systemd-sysv-generator[123946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
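Note: the ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) is what triggers the "Reloading." line and the two generator warnings that follow it. The same three steps via systemctl, sketched in Python with the unit name from the log:

    import subprocess

    unit = "edpm-container-shutdown"

    # daemon_reload=True
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    # enabled=True plus state=started in one step
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)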
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:52:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:52:00 compute-0 sudo[123950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:52:00 compute-0 sudo[123950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:00 compute-0 sudo[123950]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:52:00 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
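Note: the mon_command burst above is the cephadm mgr module refreshing its state: generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, and an osd tree filtered to destroyed OSDs. Each command appears twice because the monitor logs the handle_command/audit dispatch and then echoes the audit record into the cluster log. The osd-tree query is reproducible from the CLI, wrapped here in Python (assumes an admin keyring on the host):

    import json
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return parsed JSON."""
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    # Same query the mgr dispatches above: the OSD tree restricted to
    # OSDs in the "destroyed" state.
    tree = ceph("osd", "tree", "destroyed")
    print([n["id"] for n in tree.get("nodes", [])])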
Oct 10 09:52:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:00 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:00 compute-0 sudo[123977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:52:00 compute-0 sudo[123977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
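Note: the cephadm call above is the OSD-creation step: the copied cephadm binary runs ceph-volume lvm batch against the pre-created LV, with CEPH_VOLUME_OSDSPEC_AFFINITY pinning the result to the default_drive_group spec. A simplified reconstruction of that argv from the sudo COMMAND= line; the real config JSON is piped on stdin ("--config-json -") and is not shown in the log, so the stdin payload here is a placeholder:

    import subprocess

    fsid = "21f084a3-af34-5230-afe4-ea5cd24a55f4"
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    subprocess.run(
        ["python3", cephadm,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", image,
         "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--config-json", "-",
         "--", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=b"{}",  # placeholder: real config JSON not shown in the log
        check=True,
    )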
Oct 10 09:52:00 compute-0 sudo[123877]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:01 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.233608483 +0000 UTC m=+0.045000780 container create 55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 09:52:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:01 compute-0 systemd[1]: Started libpod-conmon-55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d.scope.
Oct 10 09:52:01 compute-0 sudo[124206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrdarvbrxofpndqeadyyvjeyrzdzadqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089920.9571843-395-24232210137625/AnsiballZ_stat.py'
Oct 10 09:52:01 compute-0 sudo[124206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:52:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.21370939 +0000 UTC m=+0.025101727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.318036647 +0000 UTC m=+0.129428974 container init 55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.326149161 +0000 UTC m=+0.137541458 container start 55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.329757875 +0000 UTC m=+0.141150172 container attach 55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:52:01 compute-0 clever_leakey[124210]: 167 167
Oct 10 09:52:01 compute-0 systemd[1]: libpod-55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d.scope: Deactivated successfully.
Oct 10 09:52:01 compute-0 conmon[124210]: conmon 55d16b9eec7100cf8af2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d.scope/container/memory.events
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.335640898 +0000 UTC m=+0.147033195 container died 55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f6e3ea2c4256fcb956bdbb334fceb52e7825c446f7df479b0693bb21391eed8-merged.mount: Deactivated successfully.
Oct 10 09:52:01 compute-0 podman[124165]: 2025-10-10 09:52:01.388525485 +0000 UTC m=+0.199917792 container remove 55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:52:01 compute-0 systemd[1]: libpod-conmon-55d16b9eec7100cf8af2e9576b5dd0275c063185f28c62dea7e066bfb56b442d.scope: Deactivated successfully.
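Note: the create -> init -> start -> attach -> died -> remove sequence above (container clever_leakey, roughly 150 ms end to end) is cephadm's usual pattern for one-shot helper commands: each runs in a throwaway container from the pinned ceph image and is removed as soon as it exits. The "167 167" it printed is consistent with cephadm probing the ceph user's uid/gid inside the image. An illustrative equivalent with podman run --rm; the actual entrypoint cephadm used is not shown in the log:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot helper in the style of the lifecycle above: run, capture,
    # auto-remove. The stat target is an assumed example path.
    out = subprocess.check_output(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
    )
    print(out.decode().strip())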
Oct 10 09:52:01 compute-0 python3.9[124211]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:01 compute-0 sudo[124206]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 09:52:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 09:52:01 compute-0 podman[124236]: 2025-10-10 09:52:01.560411387 +0000 UTC m=+0.050667108 container create ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:52:01 compute-0 systemd[1]: Started libpod-conmon-ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e.scope.
Oct 10 09:52:01 compute-0 podman[124236]: 2025-10-10 09:52:01.537178419 +0000 UTC m=+0.027434120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:52:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327e8a6bd10e78da8d0c5d1d8babcc7cce73ce34e87b25dcdd3cd9452412ac47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327e8a6bd10e78da8d0c5d1d8babcc7cce73ce34e87b25dcdd3cd9452412ac47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327e8a6bd10e78da8d0c5d1d8babcc7cce73ce34e87b25dcdd3cd9452412ac47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327e8a6bd10e78da8d0c5d1d8babcc7cce73ce34e87b25dcdd3cd9452412ac47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327e8a6bd10e78da8d0c5d1d8babcc7cce73ce34e87b25dcdd3cd9452412ac47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
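Note: the kernel warnings above appear when an XFS filesystem formatted without the bigtime feature is (re)mounted: its inode timestamps top out at the 32-bit signed epoch limit, which is exactly the 0x7fffffff in the message. The limit works out as:

    import datetime

    # 0x7fffffff is the max 32-bit signed epoch second from the warning.
    limit = datetime.datetime.fromtimestamp(0x7fffffff, datetime.timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00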
Oct 10 09:52:01 compute-0 podman[124236]: 2025-10-10 09:52:01.67165233 +0000 UTC m=+0.161908021 container init ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:52:01 compute-0 podman[124236]: 2025-10-10 09:52:01.680510408 +0000 UTC m=+0.170766089 container start ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:52:01 compute-0 podman[124236]: 2025-10-10 09:52:01.683880704 +0000 UTC m=+0.174136385 container attach ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:52:01 compute-0 ceph-mon[73551]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:52:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:01 compute-0 sudo[124332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibsdqxqleiwbtobjpateizikvmrybycw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089920.9571843-395-24232210137625/AnsiballZ_file.py'
Oct 10 09:52:01 compute-0 sudo[124332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:02 compute-0 nervous_pasteur[124261]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:52:02 compute-0 nervous_pasteur[124261]: --> All data devices are unavailable
Oct 10 09:52:02 compute-0 systemd[1]: libpod-ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e.scope: Deactivated successfully.
Oct 10 09:52:02 compute-0 podman[124236]: 2025-10-10 09:52:02.066849356 +0000 UTC m=+0.557105057 container died ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:52:02 compute-0 python3.9[124334]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-327e8a6bd10e78da8d0c5d1d8babcc7cce73ce34e87b25dcdd3cd9452412ac47-merged.mount: Deactivated successfully.
Oct 10 09:52:02 compute-0 sudo[124332]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:02 compute-0 podman[124236]: 2025-10-10 09:52:02.130889541 +0000 UTC m=+0.621145232 container remove ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:52:02 compute-0 systemd[1]: libpod-conmon-ec5c089d21ee29ca3b626012bd9f08d9100acb7693a6e6078779c5309d45bb7e.scope: Deactivated successfully.
Oct 10 09:52:02 compute-0 sudo[123977]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:02 compute-0 sudo[124380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:52:02 compute-0 sudo[124380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:02 compute-0 sudo[124380]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:52:02 compute-0 sudo[124418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:52:02 compute-0 sudo[124418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:02 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:02 compute-0 sudo[124569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lauxfjehtvdzwktdondbulawtixxoizn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089922.2834218-431-12678487892380/AnsiballZ_stat.py'
Oct 10 09:52:02 compute-0 sudo[124569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:02 compute-0 podman[124601]: 2025-10-10 09:52:02.718580745 +0000 UTC m=+0.043144762 container create 4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 10 09:52:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095202 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
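Note: the haproxy line records its Layer4 health check passing against the restarted ganesha backend (nfs.cephfs.2): a bare TCP connect and close, with no NFS protocol exchange. The svc_vc_recv EVENTs ganesha logs around it plausibly come from the same probes, since the connection is closed without a valid record being sent. A Layer4-style check is just this; port 2049 is assumed, as it does not appear in these lines:

    import socket

    # TCP connect/close, as haproxy's layer-4 "check" does. The connect
    # succeeding is enough to mark the backend UP.
    with socket.create_connection(("192.168.122.100", 2049), timeout=2):
        pass
    print("layer4 check passed")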
Oct 10 09:52:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:02 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:02 compute-0 python3.9[124578]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:02 compute-0 systemd[1]: Started libpod-conmon-4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088.scope.
Oct 10 09:52:02 compute-0 podman[124601]: 2025-10-10 09:52:02.697933289 +0000 UTC m=+0.022497316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:52:02 compute-0 sudo[124569]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:02 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:52:02 compute-0 podman[124601]: 2025-10-10 09:52:02.825652288 +0000 UTC m=+0.150216305 container init 4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:52:02 compute-0 podman[124601]: 2025-10-10 09:52:02.833669479 +0000 UTC m=+0.158233496 container start 4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swirles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:52:02 compute-0 podman[124601]: 2025-10-10 09:52:02.836677133 +0000 UTC m=+0.161241150 container attach 4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 09:52:02 compute-0 heuristic_swirles[124619]: 167 167
Oct 10 09:52:02 compute-0 systemd[1]: libpod-4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088.scope: Deactivated successfully.
Oct 10 09:52:02 compute-0 podman[124626]: 2025-10-10 09:52:02.911125285 +0000 UTC m=+0.042386608 container died 4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swirles, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-84e8bbb5219c101867c0430544a4165c21e97feeafb0a0b882eab9023b1c698c-merged.mount: Deactivated successfully.
Oct 10 09:52:02 compute-0 podman[124626]: 2025-10-10 09:52:02.949838627 +0000 UTC m=+0.081099880 container remove 4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swirles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:52:02 compute-0 systemd[1]: libpod-conmon-4e6366c220d8a31e47f2bbdb68acc2c8f0e95207bf7325668d3993c46a631088.scope: Deactivated successfully.
Oct 10 09:52:03 compute-0 sudo[124716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxlkzmboqdqnlbiveukufrsteukmaraa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089922.2834218-431-12678487892380/AnsiballZ_file.py'
Oct 10 09:52:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:03 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:03 compute-0 sudo[124716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.136005637 +0000 UTC m=+0.053534778 container create 00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:52:03 compute-0 systemd[1]: Started libpod-conmon-00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985.scope.
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.113622026 +0000 UTC m=+0.031151147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:52:03 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfaeb66347446311bdcbd01fff5eea6d113d38f1d65bc9401655ba15c2a4d727/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfaeb66347446311bdcbd01fff5eea6d113d38f1d65bc9401655ba15c2a4d727/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfaeb66347446311bdcbd01fff5eea6d113d38f1d65bc9401655ba15c2a4d727/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfaeb66347446311bdcbd01fff5eea6d113d38f1d65bc9401655ba15c2a4d727/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.234723088 +0000 UTC m=+0.152252219 container init 00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.244205155 +0000 UTC m=+0.161734256 container start 00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.247815238 +0000 UTC m=+0.165344449 container attach 00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Oct 10 09:52:03 compute-0 python3.9[124718]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:03.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:03 compute-0 sudo[124716]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]: {
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:     "0": [
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:         {
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "devices": [
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "/dev/loop3"
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             ],
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "lv_name": "ceph_lv0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "lv_size": "21470642176",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "name": "ceph_lv0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "tags": {
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.cluster_name": "ceph",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.crush_device_class": "",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.encrypted": "0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.osd_id": "0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.type": "block",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.vdo": "0",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:                 "ceph.with_tpm": "0"
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             },
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "type": "block",
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:             "vg_name": "ceph_vg0"
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:         }
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]:     ]
Oct 10 09:52:03 compute-0 upbeat_ritchie[124741]: }
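Note: the JSON block printed by upbeat_ritchie is the output of ceph-volume lvm list --format json: a single LV (/dev/ceph_vg0/ceph_lv0 backed by /dev/loop3) already tagged as OSD 0's block device, which is also why the earlier lvm batch pass (nervous_pasteur) declared all data devices unavailable - there was nothing left to create. A short sketch of consuming that output, assuming it has been captured into a string named raw:

    import json

    # raw holds the JSON block above, e.g. captured from
    # "cephadm ceph-volume -- lvm list --format json".
    inventory = json.loads(raw)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']}, "
                  f"devices {','.join(lv['devices'])})")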
Oct 10 09:52:03 compute-0 systemd[1]: libpod-00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985.scope: Deactivated successfully.
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.60872663 +0000 UTC m=+0.526255781 container died 00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfaeb66347446311bdcbd01fff5eea6d113d38f1d65bc9401655ba15c2a4d727-merged.mount: Deactivated successfully.
Oct 10 09:52:03 compute-0 podman[124724]: 2025-10-10 09:52:03.655501535 +0000 UTC m=+0.573030636 container remove 00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ritchie, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:52:03 compute-0 systemd[1]: libpod-conmon-00f188ad3d91babc7b535ba32dd7e1d875c4b9c8daa7b7fae6601d421daca985.scope: Deactivated successfully.
Oct 10 09:52:03 compute-0 sudo[124418]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:03 compute-0 ceph-mon[73551]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:52:03 compute-0 sudo[124887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:52:03 compute-0 sudo[124930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdqcvsxxqpqiewskcknxgkxenldsosxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089923.478561-467-159337819599582/AnsiballZ_systemd.py'
Oct 10 09:52:03 compute-0 sudo[124887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:03 compute-0 sudo[124930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:03 compute-0 sudo[124887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:03 compute-0 sudo[124937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:52:03 compute-0 sudo[124937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:04 compute-0 python3.9[124936]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:52:04 compute-0 systemd[1]: Reloading.
Oct 10 09:52:04 compute-0 systemd-rc-local-generator[125029]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:52:04 compute-0 systemd-sysv-generator[125033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.292223324 +0000 UTC m=+0.053420764 container create 72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_spence, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:52:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.27102551 +0000 UTC m=+0.032223010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:52:04 compute-0 systemd[1]: Started libpod-conmon-72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17.scope.
Oct 10 09:52:04 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:52:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:04 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.490781971 +0000 UTC m=+0.251979461 container init 72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.501666323 +0000 UTC m=+0.262863783 container start 72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_spence, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.505433711 +0000 UTC m=+0.266631201 container attach 72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:52:04 compute-0 wizardly_spence[125056]: 167 167
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.509471617 +0000 UTC m=+0.270669077 container died 72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_spence, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:52:04 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 09:52:04 compute-0 systemd[1]: libpod-72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17.scope: Deactivated successfully.
Oct 10 09:52:04 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:52:04 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:52:04 compute-0 systemd[1]: Finished Create netns directory.
Oct 10 09:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-378f68aaa082f29a0fdce4deaa6d2d8d234d16c1fd513e23cef538353512f178-merged.mount: Deactivated successfully.
Oct 10 09:52:04 compute-0 podman[125039]: 2025-10-10 09:52:04.55556553 +0000 UTC m=+0.316762970 container remove 72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:52:04 compute-0 systemd[1]: libpod-conmon-72c025d9aae2fb95ffe40b48aba8d3350c13cb74b840c2cac6ccc2da76cddf17.scope: Deactivated successfully.
Oct 10 09:52:04 compute-0 sudo[124930]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:04 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:04 compute-0 podman[125108]: 2025-10-10 09:52:04.762293054 +0000 UTC m=+0.068593189 container create f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bassi, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:52:04 compute-0 systemd[1]: Started libpod-conmon-f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc.scope.
Oct 10 09:52:04 compute-0 podman[125108]: 2025-10-10 09:52:04.73278313 +0000 UTC m=+0.039083305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:52:04 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01def4aa36e1ab6dce9161b7c0af5a9e6d495ed043b6f57cb325f05a44ee9d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01def4aa36e1ab6dce9161b7c0af5a9e6d495ed043b6f57cb325f05a44ee9d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01def4aa36e1ab6dce9161b7c0af5a9e6d495ed043b6f57cb325f05a44ee9d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f01def4aa36e1ab6dce9161b7c0af5a9e6d495ed043b6f57cb325f05a44ee9d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:52:04 compute-0 podman[125108]: 2025-10-10 09:52:04.859586231 +0000 UTC m=+0.165886346 container init f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:52:04 compute-0 podman[125108]: 2025-10-10 09:52:04.870470932 +0000 UTC m=+0.176771027 container start f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:52:04 compute-0 podman[125108]: 2025-10-10 09:52:04.874969683 +0000 UTC m=+0.181269788 container attach f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:52:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:05 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:52:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:05.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 09:52:05 compute-0 python3.9[125290]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:52:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:05 compute-0 network[125342]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:52:05 compute-0 network[125343]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:52:05 compute-0 network[125344]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:52:05 compute-0 lvm[125341]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:52:05 compute-0 lvm[125341]: VG ceph_vg0 finished
Oct 10 09:52:05 compute-0 distracted_bassi[125148]: {}
Oct 10 09:52:05 compute-0 podman[125108]: 2025-10-10 09:52:05.729359938 +0000 UTC m=+1.035660053 container died f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:52:05 compute-0 ceph-mon[73551]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:52:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:52:06 compute-0 systemd[1]: libpod-f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc.scope: Deactivated successfully.
Oct 10 09:52:06 compute-0 systemd[1]: libpod-f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc.scope: Consumed 1.281s CPU time.
Oct 10 09:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f01def4aa36e1ab6dce9161b7c0af5a9e6d495ed043b6f57cb325f05a44ee9d3-merged.mount: Deactivated successfully.
Oct 10 09:52:06 compute-0 podman[125108]: 2025-10-10 09:52:06.330591625 +0000 UTC m=+1.636891760 container remove f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:52:06 compute-0 systemd[1]: libpod-conmon-f296262715b94bf3d101d97a656fe3bc996fb870212340faef8d5efb432edebc.scope: Deactivated successfully.
Oct 10 09:52:06 compute-0 sudo[124937]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:52:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:52:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:06 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:06 compute-0 sudo[125373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:52:06 compute-0 sudo[125373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:06 compute-0 sudo[125373]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:06 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:06.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:52:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:06.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:52:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:06.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:52:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:07 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:07] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:52:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:07] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:52:07 compute-0 ceph-mon[73551]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:52:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:52:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 09:52:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:52:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:08 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:08 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:09 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4001110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:09.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:09 compute-0 ceph-mon[73551]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:52:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:09.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:52:10 compute-0 sudo[125652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnkajxedamfkubwmjhakofntxuwyhlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089930.0558496-545-278816840325437/AnsiballZ_stat.py'
Oct 10 09:52:10 compute-0 sudo[125652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:10 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:10 compute-0 python3.9[125654]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:10 compute-0 sudo[125652]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:10 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:10 compute-0 sudo[125731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ituwnqilytbilgerrqxfwgkkqemchrtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089930.0558496-545-278816840325437/AnsiballZ_file.py'
Oct 10 09:52:11 compute-0 sudo[125731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:11 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:11 compute-0 sudo[125734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:52:11 compute-0 sudo[125734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:11 compute-0 sudo[125734]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:11 compute-0 python3.9[125733]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:11 compute-0 sudo[125731]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:11 compute-0 ceph-mon[73551]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:52:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:52:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 09:52:11 compute-0 sudo[125908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoahwyzamuzltaxdcsxuwksumzcsowvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089931.4653506-584-63644889969582/AnsiballZ_file.py'
Oct 10 09:52:11 compute-0 sudo[125908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:12 compute-0 python3.9[125910]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:12 compute-0 sudo[125908]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:52:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:12 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4001eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:12 compute-0 sudo[126061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpuubkvpatmhgfqcrrapfrrtmhxjrnqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089932.2998848-608-92895595222523/AnsiballZ_stat.py'
Oct 10 09:52:12 compute-0 sudo[126061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:12 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:12 compute-0 python3.9[126063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:12 compute-0 sudo[126061]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:13 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:13 compute-0 sudo[126140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaovbghtjatxfezwrerdpqmzzhavciwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089932.2998848-608-92895595222523/AnsiballZ_file.py'
Oct 10 09:52:13 compute-0 sudo[126140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:13.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:13 compute-0 python3.9[126142]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:13 compute-0 sudo[126140]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:13 compute-0 ceph-mon[73551]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:52:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:13.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:14 compute-0 sudo[126293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfbnbghsjoigvqjafkzvkuibtprcwbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089933.7581224-653-35351261767951/AnsiballZ_timezone.py'
Oct 10 09:52:14 compute-0 sudo[126293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:14 compute-0 python3.9[126295]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 09:52:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:14 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:14 compute-0 systemd[1]: Starting Time & Date Service...
Oct 10 09:52:14 compute-0 systemd[1]: Started Time & Date Service.
Oct 10 09:52:14 compute-0 sudo[126293]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:14 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:15 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:15.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:15 compute-0 sudo[126450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmxqooczdjiucihwzlwdwpweqoopkaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089935.0833087-680-262399937874814/AnsiballZ_file.py'
Oct 10 09:52:15 compute-0 sudo[126450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:15 compute-0 ceph-mon[73551]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:15.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:15 compute-0 python3.9[126452]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:15 compute-0 sudo[126450]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:52:16
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', '.nfs', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control']
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:52:16 compute-0 sudo[126603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvshqdqdqpsdtwkwttbfjzoeneyvoogs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089935.9338586-704-139713095122188/AnsiballZ_stat.py'
Oct 10 09:52:16 compute-0 sudo[126603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:52:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:52:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:52:16 compute-0 python3.9[126605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:16 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:16 compute-0 sudo[126603]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:16 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:16 compute-0 sudo[126682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqnvdkgnspcoafpkspnukhekbssbmudy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089935.9338586-704-139713095122188/AnsiballZ_file.py'
Oct 10 09:52:16 compute-0 sudo[126682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:16.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:52:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:16.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:52:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:16.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:52:17 compute-0 python3.9[126684]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:17 compute-0 sudo[126682]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:17 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:17.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:17] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:52:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:17] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:52:17 compute-0 sudo[126834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcyeosthnhipoexticcahrqpgwgzorfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089937.224649-740-91282425474498/AnsiballZ_stat.py'
Oct 10 09:52:17 compute-0 sudo[126834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:17 compute-0 ceph-mon[73551]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:17 compute-0 python3.9[126836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:17 compute-0 sudo[126834]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:18 compute-0 sudo[126912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cylypuedmpjrhrsfwkvkdcdzjbsnvvxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089937.224649-740-91282425474498/AnsiballZ_file.py'
Oct 10 09:52:18 compute-0 sudo[126912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:18 compute-0 python3.9[126914]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.r8izbtsu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:18 compute-0 sudo[126912]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:18 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:18 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:18 compute-0 sudo[127066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-venrjnwtomevgcjntdbsehpvijtoejhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089938.5785758-776-200611453480742/AnsiballZ_stat.py'
Oct 10 09:52:18 compute-0 sudo[127066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:19 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:19 compute-0 python3.9[127068]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:19 compute-0 sudo[127066]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:19.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:19 compute-0 sudo[127144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbqdwtpiieljubgwwzojrxhynhigpwmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089938.5785758-776-200611453480742/AnsiballZ_file.py'
Oct 10 09:52:19 compute-0 sudo[127144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:19 compute-0 python3.9[127146]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:19 compute-0 sudo[127144]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:19 compute-0 ceph-mon[73551]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:20 compute-0 sudo[127297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiccsgillsfwiarcyxikmduzvhvnblqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089940.010616-815-150037212630522/AnsiballZ_command.py'
Oct 10 09:52:20 compute-0 sudo[127297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:20 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:20 compute-0 python3.9[127299]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
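The `nft -j list ruleset` call above captures the live ruleset as JSON (a top-level {"nftables": [...]} array), which the firewall role can compare against its desired state. A sketch of inspecting that output by hand, assuming jq is installed on the node:

    nft -j list ruleset | jq '.nftables[] | select(.table)'   # keep only the table objects from the array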
Oct 10 09:52:20 compute-0 sudo[127297]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:20 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:21 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:21 compute-0 sudo[127451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynealevfctnclgizpjdwdoarocdrphs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760089940.9830391-839-22468398427485/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 09:52:21 compute-0 sudo[127451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:52:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:52:21 compute-0 python3[127453]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 09:52:21 compute-0 sudo[127451]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:21 compute-0 ceph-mon[73551]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:22 compute-0 sudo[127604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjzatnriabtsvhhrflfksqrhpmqtmnev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089941.9249156-863-39074449052258/AnsiballZ_stat.py'
Oct 10 09:52:22 compute-0 sudo[127604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:22 compute-0 python3.9[127606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:22 compute-0 sudo[127604]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:22 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:22 compute-0 sudo[127682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdefuwxgpmsdviejftdehfxpzqsbucrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089941.9249156-863-39074449052258/AnsiballZ_file.py'
Oct 10 09:52:22 compute-0 sudo[127682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:22 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:22 compute-0 python3.9[127684]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:22 compute-0 sudo[127682]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:23 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:23.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:23 compute-0 sudo[127835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqachadgbkictiecydeyakquiyfolhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089943.3016365-899-181978089528637/AnsiballZ_stat.py'
Oct 10 09:52:23 compute-0 sudo[127835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:23 compute-0 ceph-mon[73551]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:23 compute-0 python3.9[127837]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:23 compute-0 sudo[127835]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:24 compute-0 sudo[127914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegpsufqlfdycgntkjxqgxsfylcpvqgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089943.3016365-899-181978089528637/AnsiballZ_file.py'
Oct 10 09:52:24 compute-0 sudo[127914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:24 compute-0 python3.9[127916]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:24 compute-0 sudo[127914]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:24 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:24 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:24 compute-0 sudo[128067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgntbrrnelynmaainhfiagemuvasrykc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089944.6992576-935-144711485570469/AnsiballZ_stat.py'
Oct 10 09:52:25 compute-0 sudo[128067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:25 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:25 compute-0 python3.9[128069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:25 compute-0 sudo[128067]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:25.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:25 compute-0 sudo[128145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmqpsthmzqyfucbiivxzsvncxviadkoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089944.6992576-935-144711485570469/AnsiballZ_file.py'
Oct 10 09:52:25 compute-0 sudo[128145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:25 compute-0 python3.9[128147]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:25 compute-0 ceph-mon[73551]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:25 compute-0 sudo[128145]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:26 compute-0 sudo[128298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fimvmngecwxqenfmpoorjigmckniigva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089946.0789602-971-259895942968577/AnsiballZ_stat.py'
Oct 10 09:52:26 compute-0 sudo[128298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:26 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:26 compute-0 python3.9[128300]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:26 compute-0 sudo[128298]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:26 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:26 compute-0 sudo[128377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwmucmogpydlpsiezftrzailkvcwslkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089946.0789602-971-259895942968577/AnsiballZ_file.py'
Oct 10 09:52:26 compute-0 sudo[128377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:52:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:52:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:27 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:27 compute-0 python3.9[128379]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:27 compute-0 sudo[128377]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:27.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:27] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:52:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:27] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:52:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:27.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:27 compute-0 ceph-mon[73551]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:27 compute-0 sudo[128529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmhgiishzlbpulgsqiiyrkdsyztprjbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089947.44156-1007-188591512943438/AnsiballZ_stat.py'
Oct 10 09:52:27 compute-0 sudo[128529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:28 compute-0 python3.9[128531]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:28 compute-0 sudo[128529]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:28 compute-0 sudo[128608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlhkdckwopkehzbykumtunjzavutcpxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089947.44156-1007-188591512943438/AnsiballZ_file.py'
Oct 10 09:52:28 compute-0 sudo[128608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:28 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:28 compute-0 python3.9[128610]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:28 compute-0 sudo[128608]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:28 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:29 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:29.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:29 compute-0 sudo[128761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgsglkkedickrvqrnctlepblpegefntk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089949.0481172-1046-88156385401138/AnsiballZ_command.py'
Oct 10 09:52:29 compute-0 sudo[128761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:29 compute-0 python3.9[128763]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
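The check above concatenates the five EDPM nftables fragments in load order (chains, flushes, rules, update-jumps, jumps) and parses them with `nft -c -f -`, which reads from stdin and validates without committing anything to the kernel. The same dry run, reproduced as a standalone sketch:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: syntax/semantic check only, nothing applied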
Oct 10 09:52:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:29.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:29 compute-0 sudo[128761]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:29 compute-0 ceph-mon[73551]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:30 compute-0 sudo[128917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlzktykraonlsfgurtbwkkpilsbliduq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089949.8962078-1070-8644269902427/AnsiballZ_blockinfile.py'
Oct 10 09:52:30 compute-0 sudo[128917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:30 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:30 compute-0 python3.9[128919]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
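Given the parameters logged above (marker `# {mark} ANSIBLE MANAGED BLOCK` with BEGIN/END, validate `nft -c -f %s`), the block that blockinfile maintains in /etc/sysconfig/nftables.conf should come out as follows, with each candidate file passed through the nft check before the write is kept:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK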
Oct 10 09:52:30 compute-0 sudo[128917]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:30 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:52:30 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:52:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:30 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:31 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:31 compute-0 sudo[129071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xibucjkdzwdgrybgvonargdxrlsbpbru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089950.8911293-1097-80534137351767/AnsiballZ_file.py'
Oct 10 09:52:31 compute-0 sudo[129071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:31 compute-0 sudo[129072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:52:31 compute-0 sudo[129072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:31 compute-0 sudo[129072]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:52:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:31.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:31 compute-0 python3.9[129079]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:31 compute-0 sudo[129071]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:31.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:31 compute-0 ceph-mon[73551]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:31 compute-0 sudo[129248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsrsddyvywtsmamuvvuaxskyrinevrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089951.6120496-1097-80807503296689/AnsiballZ_file.py'
Oct 10 09:52:31 compute-0 sudo[129248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:32 compute-0 python3.9[129250]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:32 compute-0 sudo[129248]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:32 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:32 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:32 compute-0 sudo[129403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttijmdxebmtbluinqbprkaxtwtdlojxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089952.4325402-1142-211241042305081/AnsiballZ_mount.py'
Oct 10 09:52:32 compute-0 sudo[129403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:33 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:33 compute-0 python3.9[129405]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:52:33 compute-0 sudo[129403]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:33 compute-0 sudo[129555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kygpklmdvdazgbknozsvybmwgncukyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089953.394808-1142-230697299732531/AnsiballZ_mount.py'
Oct 10 09:52:33 compute-0 sudo[129555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:33 compute-0 ceph-mon[73551]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:33 compute-0 python3.9[129557]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
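With state=mounted and boot=True, ansible.posix.mount both mounts the filesystem and persists it in /etc/fstab; from the parameters in the two invocations above (src=none, fstype=hugetlbfs, dump=0, passno=0), the resulting entries would plausibly read:

    none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0  0
    none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0  0

The equivalent one-off command for the second mount would be `mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M`.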
Oct 10 09:52:34 compute-0 sudo[129555]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:34 compute-0 sshd-session[121532]: Connection closed by 192.168.122.30 port 37504
Oct 10 09:52:34 compute-0 sshd-session[121508]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:52:34 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 10 09:52:34 compute-0 systemd[1]: session-45.scope: Consumed 33.408s CPU time.
Oct 10 09:52:34 compute-0 systemd-logind[806]: Session 45 logged out. Waiting for processes to exit.
Oct 10 09:52:34 compute-0 systemd-logind[806]: Removed session 45.
Oct 10 09:52:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:34 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:34 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:35 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:35.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:35 compute-0 ceph-mon[73551]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:36 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:36 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:36.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:52:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:37 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:37] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 10 09:52:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:37] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 10 09:52:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:37.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:37 compute-0 ceph-mon[73551]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:38 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:38 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00e80021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:39 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:39.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:39 compute-0 ceph-mon[73551]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:40 compute-0 sshd-session[129590]: Accepted publickey for zuul from 192.168.122.30 port 46982 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:52:40 compute-0 systemd-logind[806]: New session 46 of user zuul.
Oct 10 09:52:40 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 10 09:52:40 compute-0 sshd-session[129590]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:52:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:40 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:40 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4001370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:40 compute-0 sudo[129744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxrphcswkvhihuvtdomnzrebxfjacfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089960.334429-18-146445835083008/AnsiballZ_tempfile.py'
Oct 10 09:52:40 compute-0 sudo[129744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:41 compute-0 python3.9[129746]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 09:52:41 compute-0 sudo[129744]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:41 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:41.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:41.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:41 compute-0 sudo[129896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlwgmerozgaefveglwijoafwontycdjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089961.2742634-54-257037262867763/AnsiballZ_stat.py'
Oct 10 09:52:41 compute-0 sudo[129896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:41 compute-0 ceph-mon[73551]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:41 compute-0 python3.9[129898]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:52:41 compute-0 sudo[129896]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:42 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:42 compute-0 sudo[130051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvtlulhwecxhcwygxouteevgtisufhxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089962.2058072-78-218870757457529/AnsiballZ_slurp.py'
Oct 10 09:52:42 compute-0 sudo[130051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:42 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:42 compute-0 python3.9[130053]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 10 09:52:42 compute-0 sudo[130051]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:42 compute-0 ceph-mon[73551]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:43 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4001370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:43.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:43 compute-0 sudo[130204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxdvbejwdnjrjjvbmkjrayydvhbagmta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089963.1232903-102-96339465538105/AnsiballZ_stat.py'
Oct 10 09:52:43 compute-0 sudo[130204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:43.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:43 compute-0 python3.9[130206]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.pxsm6irw follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:43 compute-0 sudo[130204]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:44 compute-0 sudo[130330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazldajttwqqudrvceqwdfzbtaoybxfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089963.1232903-102-96339465538105/AnsiballZ_copy.py'
Oct 10 09:52:44 compute-0 sudo[130330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:44 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:44 compute-0 python3.9[130332]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.pxsm6irw mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089963.1232903-102-96339465538105/.source.pxsm6irw _original_basename=.dsn1f9fw follow=False checksum=2d908d3ce99ab235b2c2751c9a38992c3c685672 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:44 compute-0 sudo[130330]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:44 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 09:52:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:44 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:45 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:45.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:45 compute-0 ceph-mon[73551]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:45 compute-0 sudo[130485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlspvpcwtultnamkdsddvkehgaavoxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089964.8198185-147-88829189650327/AnsiballZ_setup.py'
Oct 10 09:52:45 compute-0 sudo[130485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:45 compute-0 sshd-session[70462]: Received disconnect from 38.102.83.82 port 38382:11: disconnected by user
Oct 10 09:52:45 compute-0 sshd-session[70462]: Disconnected from user zuul 38.102.83.82 port 38382
Oct 10 09:52:45 compute-0 sshd-session[70459]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:52:45 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 10 09:52:45 compute-0 systemd[1]: session-19.scope: Consumed 1min 42.826s CPU time.
Oct 10 09:52:45 compute-0 systemd-logind[806]: Session 19 logged out. Waiting for processes to exit.
Oct 10 09:52:45 compute-0 systemd-logind[806]: Removed session 19.
Oct 10 09:52:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:45.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:45 compute-0 python3.9[130487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:52:45 compute-0 sudo[130485]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:52:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:52:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:52:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:46 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4002080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:46 compute-0 sudo[130638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flrdzrgfsmkegogqwmmrulrohulsqwvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089966.0300682-172-160944315025698/AnsiballZ_blockinfile.py'
Oct 10 09:52:46 compute-0 sudo[130638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:46 compute-0 python3.9[130640]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs576V3VvbSgv48Ml4JM3ripPY5VUVh8vdkDr1njjfd7J/WrQQkTf/D0b7+eGTXj3Y1fx1/haVrDafo7g0NqcSZX+zNUgTCnYPWafo7RMG4Q7ITVk1NPIkAC1cDUxHNeWhXaOkxCz96sTkO4aNW3uoFjsp2JkJtRJmHzT7q/bc0N9x7YcWh9vwRRBiOKlV8cWMHuHUzOlloEQLN67Dht1xHWr1eO/SITqUlWY13tc/54xQuo8nBQNNX9ArhMbJz2a9AoNVUAAYFF8hWFI5ES/GL9qsCp8dnmAtrY4Rc07QmHo1RkcjXe1f6D+vymRIP3YOqIjlWp0blCTfcCGno5lBa9f5JachIsogk+5+GYx4AAbWLyxxecfKzdCxrGnQlfFgldc1xDN1RG+8HwFEAuHQDWTCDUgF67FXSHy7aVxrdzU4046193/o3VKTpSaJmFldASxFgyUeujs56OgC0qYM0zKV4jOsMBcocVHvH/1FOPWIr81XXYvu6C/Ntd6sBj0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSf7pFS/S1SmUMk/yMobwR+LTaQZlAhBqo7Ido5r8dg
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1l0EOuMseZ7ulHkfzzVtKv+5A9EWRy+oXVB+t370vohhJoN3+lviS8xoR8GttJUcHVCaeioniRtOWysbNdC0I=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUnwO+j5aInA4FKMx5pWF8B0Zp6L17GsYV5RBbu6iT67LtXjwbz5nP4EC7t80boMHnS7DRNCAxF0FNMVhQ9o4+1E1n2mrUxxAw8YxcZTabu/lAqRb4I6RzmXdXSA9mF8O3onswi/KhJg6YUTFEWCuxWrMLco15IatKi+hNqcRUk1DreR2L/YN0W5qXkvj1z3aoph1h3Yn1lRjuQDrVHp6lCywixC2pHwYG+CrPyX+0PkXJg+JRvRdxNCIw0D0zOkJrnppmT8XpIj42JLRUGGV592XFVXHiEhZdOI2bdzPy490EfIbWF9Symqi/V5vf8SK9LMOscHXkD7jsT6VKzsUXyk6/IzzZ2TzhD173lt8HpRJyaZq4ME0ZSVYNyD58DN/CQ3xpO1c1E8Wp4fUswc4WHmb/eILnY0lDXOZt6Hb/e+K6RHu5e5GOo0KSfei/LyrqJkBQn2P8UkbJvrUh2bNw+whjvT5CmXd3rPCw+Xq3/K3Gpit1K/4pC0zGC+CQr7E=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILklS4uW4IrGY5dWZTg4VeKVeFB3jPeUpu/8f4D1+rd5
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCelD2lLiMWT09YjxTI9IfdSnHfdMuHKAAEYFKZmJg34mgwUIDqUQqoc9I6a7Ps9pRizY+UpHWL//lD7hvvhD5k=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDarlOcgDXqRdSww3oIuqu7nGIBJToNGSnU1ljOr6GTlHTxxOoTztIrvZrPaJA8w/ixztkhFZZSdRPw4meYayY05CNu9SneiL62twzDLDsqeDPAspkh69Ljj5aGCLf6GJDiK0m2h1jLDIFtXH3lIQE9781zA7ZQ8+/xeF4yRS1/Fb5CXDG+oi/J0veCffs6t0TYmrUfSgS2H2y0UxNu7C6GoQKRde1arPLOYexvlg2RjlWM6Ex4JCqTAd9EN330Kh4HUr3r46ET8mwi1mPndibbiW0heXgrg8FeV5hBqOxQsGgLEKpX1cNAz6Rr0C5Hg1xfGcsJtep88vbJFmMyV1jNowDtJCYpprqa16Nj35HBuuz7zbzVlIdeQhEJ9I4I7eNhUxlb2/XYRXy2hfsrM9D2TP7B+bVPLjlqgqy8stBhGBCtH32ppNsXHE6uGPHMovcz2VhbP/P3sp9NQV+hF2Q0RbBXrQZkEI9YJdhxQw5hyOqwfPrEEBFy8FpzSKfBAW0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1nQuW/lbxVJxo9H20J7i0+Z6cHtufrF4VbA6zs724f
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0oTxSrAqx34tAubl7rouYPI7qhs6NhoDmGr3PTW1+mypEQw0EO+pZ99zSRnweC5RBoL080AgUKo7KN+v3LDHw=
                                              create=True mode=0644 path=/tmp/ansible.pxsm6irw state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:46 compute-0 sudo[130638]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:46 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:46.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:52:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:47 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:47.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:47] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:52:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:47] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:52:47 compute-0 ceph-mon[73551]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:47 compute-0 sudo[130791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cysljoypbkqexfuqfjavspdychafhbxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089967.005682-196-23476408158140/AnsiballZ_command.py'
Oct 10 09:52:47 compute-0 sudo[130791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:47.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:47 compute-0 python3.9[130793]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pxsm6irw' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:52:47 compute-0 sudo[130791]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:48 compute-0 sudo[130946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msliwofunwworzjjwutkxklogfnwvkwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089967.904499-220-96710917225652/AnsiballZ_file.py'
Oct 10 09:52:48 compute-0 sudo[130946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:48 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:48 compute-0 python3.9[130948]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pxsm6irw state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:48 compute-0 sudo[130946]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:48 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4002080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:48 compute-0 sshd-session[129593]: Connection closed by 192.168.122.30 port 46982
Oct 10 09:52:49 compute-0 sshd-session[129590]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:52:49 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 10 09:52:49 compute-0 systemd[1]: session-46.scope: Consumed 5.694s CPU time.
Oct 10 09:52:49 compute-0 systemd-logind[806]: Session 46 logged out. Waiting for processes to exit.
Oct 10 09:52:49 compute-0 systemd-logind[806]: Removed session 46.
Oct 10 09:52:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:49 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:49.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:49 compute-0 ceph-mon[73551]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:50 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:50 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:51 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:51.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:51 compute-0 sudo[130976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:52:51 compute-0 sudo[130976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:51 compute-0 sudo[130976]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:51 compute-0 ceph-mon[73551]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:51.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:52 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:52 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:53 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:53.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:52:53 compute-0 ceph-mon[73551]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:54 compute-0 sshd-session[131003]: Accepted publickey for zuul from 192.168.122.30 port 59500 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:52:54 compute-0 systemd-logind[806]: New session 47 of user zuul.
Oct 10 09:52:54 compute-0 systemd[1]: Started Session 47 of User zuul.
Oct 10 09:52:54 compute-0 sshd-session[131003]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:52:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:54 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:54 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:55 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:55 compute-0 python3.9[131158]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:52:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:55 compute-0 ceph-mon[73551]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:56 compute-0 sudo[131313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgoboumogkwzitkvtkwslzynngbidcsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089975.5420282-56-159646296482405/AnsiballZ_systemd.py'
Oct 10 09:52:56 compute-0 sudo[131313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:56 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:56 compute-0 python3.9[131315]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 09:52:56 compute-0 sudo[131313]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:56 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4003aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:52:56.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:52:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:57 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:57 compute-0 sudo[131468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juysaxavatgqabzfiddlbbaptqnccoyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089976.8474755-80-168804163765060/AnsiballZ_systemd.py'
Oct 10 09:52:57 compute-0 sudo[131468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:57.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:57] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:52:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:52:57] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:52:57 compute-0 python3.9[131470]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:52:57 compute-0 sudo[131468]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:57 compute-0 ceph-mon[73551]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:57.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:58 compute-0 sudo[131622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adveumasppfccllnqpcaonuysbaevoof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089977.8157997-107-146629699209589/AnsiballZ_command.py'
Oct 10 09:52:58 compute-0 sudo[131622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:58 compute-0 python3.9[131624]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:52:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:58 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.565149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978565176, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1940, "num_deletes": 251, "total_data_size": 4165076, "memory_usage": 4230480, "flush_reason": "Manual Compaction"}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978581001, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2509325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10823, "largest_seqno": 12761, "table_properties": {"data_size": 2503062, "index_size": 3206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15654, "raw_average_key_size": 20, "raw_value_size": 2489405, "raw_average_value_size": 3195, "num_data_blocks": 143, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089776, "oldest_key_time": 1760089776, "file_creation_time": 1760089978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 15898 microseconds, and 5942 cpu microseconds.
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:52:58 compute-0 sudo[131622]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.581041) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2509325 bytes OK
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.581064) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.583777) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.583795) EVENT_LOG_v1 {"time_micros": 1760089978583791, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.583811) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4157257, prev total WAL file size 4157257, number of live WAL files 2.
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.584813) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2450KB)], [26(12MB)]
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978584841, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 15935108, "oldest_snapshot_seqno": -1}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4431 keys, 14286642 bytes, temperature: kUnknown
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978676721, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14286642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14252739, "index_size": 21697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111688, "raw_average_key_size": 25, "raw_value_size": 14167766, "raw_average_value_size": 3197, "num_data_blocks": 932, "num_entries": 4431, "num_filter_entries": 4431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760089978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.677083) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14286642 bytes
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.680212) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.2 rd, 155.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(12.0) write-amplify(5.7) OK, records in: 4854, records dropped: 423 output_compression: NoCompression
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.680243) EVENT_LOG_v1 {"time_micros": 1760089978680229, "job": 10, "event": "compaction_finished", "compaction_time_micros": 91993, "compaction_time_cpu_micros": 50556, "output_level": 6, "num_output_files": 1, "total_output_size": 14286642, "num_input_records": 4854, "num_output_records": 4431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978680946, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978684625, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.584719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.684752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.684759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.684761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.684763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:52:58.684765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:58 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00f4003270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:52:59 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00d4003aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:59 compute-0 sudo[131777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsdbqygjbycmdcfcspwokmuqoebqxrqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089978.8118804-131-59268210742073/AnsiballZ_stat.py'
Oct 10 09:52:59 compute-0 sudo[131777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:59 compute-0 python3.9[131779]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:52:59 compute-0 sudo[131777]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:59 compute-0 ceph-mon[73551]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:52:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:52:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:59.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:00 compute-0 sudo[131930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwyniwiikiibstbzjvjmctuywuztdao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089979.7770402-158-58118097917068/AnsiballZ_file.py'
Oct 10 09:53:00 compute-0 sudo[131930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:00 compute-0 python3.9[131932]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:00 compute-0 sudo[131930]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:53:00 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[121626]: 10/10/2025 09:53:00 : epoch 68e8d733 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f00c4003c10 fd 39 proxy ignored for local
Oct 10 09:53:00 compute-0 kernel: ganesha.nfsd[129328]: segfault at 50 ip 00007f01a751c32e sp 00007f01657f9210 error 4 in libntirpc.so.5.8[7f01a7501000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 10 09:53:00 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 09:53:00 compute-0 systemd[1]: Started Process Core Dump (PID 131958/UID 0).
Oct 10 09:53:00 compute-0 sshd-session[131007]: Connection closed by 192.168.122.30 port 59500
Oct 10 09:53:00 compute-0 sshd-session[131003]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:53:00 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Oct 10 09:53:00 compute-0 systemd[1]: session-47.scope: Consumed 4.389s CPU time.
Oct 10 09:53:00 compute-0 systemd-logind[806]: Session 47 logged out. Waiting for processes to exit.
Oct 10 09:53:00 compute-0 systemd-logind[806]: Removed session 47.
Oct 10 09:53:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:53:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:01 compute-0 ceph-mon[73551]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:01 compute-0 systemd-coredump[131959]: Process 121630 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f01a751c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 09:53:01 compute-0 systemd[1]: systemd-coredump@2-131958-0.service: Deactivated successfully.
Oct 10 09:53:01 compute-0 systemd[1]: systemd-coredump@2-131958-0.service: Consumed 1.107s CPU time.
Oct 10 09:53:02 compute-0 podman[131964]: 2025-10-10 09:53:02.046612408 +0000 UTC m=+0.039163659 container died 804db3ea2f85fd0d1d8332f29973f3e619ec60f325456507a3c44f9173cd53e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2aec44a996ebfac5b1b7cc8275f29564a9fa6dc8f4f3ea4c2429596e75e075-merged.mount: Deactivated successfully.
Oct 10 09:53:02 compute-0 podman[131964]: 2025-10-10 09:53:02.087074846 +0000 UTC m=+0.079626087 container remove 804db3ea2f85fd0d1d8332f29973f3e619ec60f325456507a3c44f9173cd53e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:53:02 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 09:53:02 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 09:53:02 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.575s CPU time.
Oct 10 09:53:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:03.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:03 compute-0 ceph-mon[73551]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:53:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:53:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:05 compute-0 ceph-mon[73551]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:06 compute-0 sshd-session[132012]: Accepted publickey for zuul from 192.168.122.30 port 45738 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:53:06 compute-0 systemd-logind[806]: New session 48 of user zuul.
Oct 10 09:53:06 compute-0 systemd[1]: Started Session 48 of User zuul.
Oct 10 09:53:06 compute-0 sshd-session[132012]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:53:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095306 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:53:06 compute-0 sudo[132040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:53:06 compute-0 sudo[132040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:06 compute-0 sudo[132040]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:06 compute-0 sudo[132094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:53:06 compute-0 sudo[132094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:53:06.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:53:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:07] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:53:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:07] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:53:07 compute-0 sudo[132094]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:53:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:53:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:07 compute-0 ceph-mon[73551]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:53:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:53:07 compute-0 sudo[132248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:53:07 compute-0 sudo[132248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:07 compute-0 sudo[132248]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:07 compute-0 python3.9[132247]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:53:07 compute-0 sudo[132273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:53:07 compute-0 sudo[132273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.289432305 +0000 UTC m=+0.066375069 container create 5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_brown, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:53:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:08 compute-0 systemd[1]: Started libpod-conmon-5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510.scope.
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.26366429 +0000 UTC m=+0.040607064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:08 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.376723213 +0000 UTC m=+0.153666007 container init 5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.384212239 +0000 UTC m=+0.161155033 container start 5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.388383061 +0000 UTC m=+0.165325865 container attach 5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:53:08 compute-0 gifted_brown[132416]: 167 167
Oct 10 09:53:08 compute-0 systemd[1]: libpod-5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510.scope: Deactivated successfully.
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.390590321 +0000 UTC m=+0.167533095 container died 5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_brown, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-43b1dd0078aed6524c6087162a3d729aab5a57a5f95a7202fe047103010cbb33-merged.mount: Deactivated successfully.
Oct 10 09:53:08 compute-0 podman[132368]: 2025-10-10 09:53:08.432447454 +0000 UTC m=+0.209390208 container remove 5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_brown, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:53:08 compute-0 systemd[1]: libpod-conmon-5f1c5a5b8518146e835b18c21a164fc249827c4a1f5704c83d9009c5a09e8510.scope: Deactivated successfully.
Oct 10 09:53:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:08 compute-0 podman[132504]: 2025-10-10 09:53:08.60258729 +0000 UTC m=+0.054200944 container create 4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hofstadter, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:53:08 compute-0 systemd[1]: Started libpod-conmon-4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893.scope.
Oct 10 09:53:08 compute-0 sudo[132547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvqwkvucmsdcetmtkmrmaywubdeksvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089988.304284-62-138777919768283/AnsiballZ_setup.py'
Oct 10 09:53:08 compute-0 sudo[132547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:08 compute-0 podman[132504]: 2025-10-10 09:53:08.5728629 +0000 UTC m=+0.024476574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:08 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cbff3b44fdfd483dc09062266408e3c5cd42d210baba62bd87838dab91d21d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cbff3b44fdfd483dc09062266408e3c5cd42d210baba62bd87838dab91d21d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cbff3b44fdfd483dc09062266408e3c5cd42d210baba62bd87838dab91d21d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cbff3b44fdfd483dc09062266408e3c5cd42d210baba62bd87838dab91d21d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cbff3b44fdfd483dc09062266408e3c5cd42d210baba62bd87838dab91d21d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:08 compute-0 podman[132504]: 2025-10-10 09:53:08.70039301 +0000 UTC m=+0.152006694 container init 4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hofstadter, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:53:08 compute-0 podman[132504]: 2025-10-10 09:53:08.711629735 +0000 UTC m=+0.163243389 container start 4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hofstadter, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:53:08 compute-0 podman[132504]: 2025-10-10 09:53:08.714345431 +0000 UTC m=+0.165959085 container attach 4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 09:53:08 compute-0 python3.9[132553]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:53:09 compute-0 pedantic_hofstadter[132551]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:53:09 compute-0 pedantic_hofstadter[132551]: --> All data devices are unavailable
Oct 10 09:53:09 compute-0 systemd[1]: libpod-4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893.scope: Deactivated successfully.
Oct 10 09:53:09 compute-0 podman[132504]: 2025-10-10 09:53:09.045405173 +0000 UTC m=+0.497018927 container died 4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2cbff3b44fdfd483dc09062266408e3c5cd42d210baba62bd87838dab91d21d-merged.mount: Deactivated successfully.
Oct 10 09:53:09 compute-0 podman[132504]: 2025-10-10 09:53:09.110099107 +0000 UTC m=+0.561712801 container remove 4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 09:53:09 compute-0 systemd[1]: libpod-conmon-4c7dc37c8388be7349f346245175eef6fffb154c0fe024af821aab249468f893.scope: Deactivated successfully.
Oct 10 09:53:09 compute-0 sudo[132273]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:09 compute-0 sudo[132547]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:09 compute-0 sudo[132589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:53:09 compute-0 sudo[132589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:09 compute-0 sudo[132589]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:09 compute-0 sudo[132614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:53:09 compute-0 sudo[132614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:53:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:09 compute-0 sudo[132713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtsgvueeatytpdmlloekimlwfftyvbnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089988.304284-62-138777919768283/AnsiballZ_dnf.py'
Oct 10 09:53:09 compute-0 sudo[132713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:09 compute-0 ceph-mon[73551]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:09 compute-0 python3.9[132722]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:53:09 compute-0 podman[132756]: 2025-10-10 09:53:09.87067598 +0000 UTC m=+0.071494471 container create 98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:53:09 compute-0 systemd[1]: Started libpod-conmon-98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315.scope.
Oct 10 09:53:09 compute-0 podman[132756]: 2025-10-10 09:53:09.837065338 +0000 UTC m=+0.037883919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:09 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:53:09 compute-0 podman[132756]: 2025-10-10 09:53:09.977470625 +0000 UTC m=+0.178289126 container init 98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 09:53:09 compute-0 podman[132756]: 2025-10-10 09:53:09.986279053 +0000 UTC m=+0.187097534 container start 98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:53:09 compute-0 podman[132756]: 2025-10-10 09:53:09.989749242 +0000 UTC m=+0.190567803 container attach 98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:53:09 compute-0 reverent_varahamihira[132774]: 167 167
Oct 10 09:53:09 compute-0 systemd[1]: libpod-98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315.scope: Deactivated successfully.
Oct 10 09:53:09 compute-0 podman[132756]: 2025-10-10 09:53:09.993635805 +0000 UTC m=+0.194454316 container died 98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Oct 10 09:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1398aaefe2614dcd8623c92b025c7a350c9e8e75e6f2cdf766eed280b15bf4db-merged.mount: Deactivated successfully.
Oct 10 09:53:10 compute-0 podman[132756]: 2025-10-10 09:53:10.052228757 +0000 UTC m=+0.253047278 container remove 98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_varahamihira, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:53:10 compute-0 systemd[1]: libpod-conmon-98e22394c163c43372f3a77c30de3b78dc180005c37e4f0b66fbfc1d69993315.scope: Deactivated successfully.
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.281263964 +0000 UTC m=+0.051056155 container create 73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_khorana, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:53:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:10 compute-0 systemd[1]: Started libpod-conmon-73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41.scope.
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.264879066 +0000 UTC m=+0.034671247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:10 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a1117d2bcaec51ab3cf496922308f4d695943c58416d15c2e0758645f88db9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a1117d2bcaec51ab3cf496922308f4d695943c58416d15c2e0758645f88db9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a1117d2bcaec51ab3cf496922308f4d695943c58416d15c2e0758645f88db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a1117d2bcaec51ab3cf496922308f4d695943c58416d15c2e0758645f88db9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.402178135 +0000 UTC m=+0.171970376 container init 73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.416742775 +0000 UTC m=+0.186534976 container start 73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_khorana, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.423572201 +0000 UTC m=+0.193364442 container attach 73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]: {
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:     "0": [
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:         {
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "devices": [
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "/dev/loop3"
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             ],
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "lv_name": "ceph_lv0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "lv_size": "21470642176",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "name": "ceph_lv0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "tags": {
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.cluster_name": "ceph",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.crush_device_class": "",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.encrypted": "0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.osd_id": "0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.type": "block",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.vdo": "0",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:                 "ceph.with_tpm": "0"
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             },
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "type": "block",
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:             "vg_name": "ceph_vg0"
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:         }
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]:     ]
Oct 10 09:53:10 compute-0 suspicious_khorana[132815]: }
Oct 10 09:53:10 compute-0 systemd[1]: libpod-73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41.scope: Deactivated successfully.
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.782641707 +0000 UTC m=+0.552433938 container died 73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a1117d2bcaec51ab3cf496922308f4d695943c58416d15c2e0758645f88db9-merged.mount: Deactivated successfully.
Oct 10 09:53:10 compute-0 podman[132799]: 2025-10-10 09:53:10.852867656 +0000 UTC m=+0.622659827 container remove 73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_khorana, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 09:53:10 compute-0 systemd[1]: libpod-conmon-73f910238fda5ba6c5d89870f6f2251bbfc01f46efe88345b7498d3b030e3c41.scope: Deactivated successfully.
Oct 10 09:53:10 compute-0 sudo[132614]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:10 compute-0 sudo[132838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:53:10 compute-0 sudo[132838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:11 compute-0 sudo[132838]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:11 compute-0 sudo[132863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:53:11 compute-0 sudo[132863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:11 compute-0 sudo[132713]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:53:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:11 compute-0 sudo[132990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:53:11 compute-0 sudo[132990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:11 compute-0 sudo[132990]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.572377762 +0000 UTC m=+0.040743728 container create 78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:53:11 compute-0 systemd[1]: Started libpod-conmon-78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491.scope.
Oct 10 09:53:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:53:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:11 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.555825999 +0000 UTC m=+0.024191985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.661160047 +0000 UTC m=+0.129526013 container init 78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bassi, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.671419532 +0000 UTC m=+0.139785508 container start 78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bassi, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.6748712 +0000 UTC m=+0.143237186 container attach 78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bassi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 09:53:11 compute-0 gallant_bassi[133091]: 167 167
Oct 10 09:53:11 compute-0 systemd[1]: libpod-78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491.scope: Deactivated successfully.
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.679828188 +0000 UTC m=+0.148194174 container died 78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 09:53:11 compute-0 ceph-mon[73551]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2748e76c17ecc95852cfdb0137b22905c1b33b00bb33b75b852682acf6dc682-merged.mount: Deactivated successfully.
Oct 10 09:53:11 compute-0 podman[133034]: 2025-10-10 09:53:11.726711239 +0000 UTC m=+0.195077205 container remove 78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:53:11 compute-0 systemd[1]: libpod-conmon-78b50cca84f36fe4a51fc3ac703fb5c1b5dae049da68edcf17b4adf5516c2491.scope: Deactivated successfully.
Oct 10 09:53:11 compute-0 podman[133144]: 2025-10-10 09:53:11.912732697 +0000 UTC m=+0.054805813 container create cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:53:11 compute-0 python3.9[133133]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:53:11 compute-0 systemd[1]: Started libpod-conmon-cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222.scope.
Oct 10 09:53:11 compute-0 podman[133144]: 2025-10-10 09:53:11.8894084 +0000 UTC m=+0.031481566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:11 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83febd7bdd93b6e0176aa1130f5988ab36bd952f5927477d97a0655e2b43c34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83febd7bdd93b6e0176aa1130f5988ab36bd952f5927477d97a0655e2b43c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83febd7bdd93b6e0176aa1130f5988ab36bd952f5927477d97a0655e2b43c34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83febd7bdd93b6e0176aa1130f5988ab36bd952f5927477d97a0655e2b43c34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:12 compute-0 podman[133144]: 2025-10-10 09:53:12.007701028 +0000 UTC m=+0.149774224 container init cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:53:12 compute-0 podman[133144]: 2025-10-10 09:53:12.017819358 +0000 UTC m=+0.159892514 container start cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:53:12 compute-0 podman[133144]: 2025-10-10 09:53:12.023159436 +0000 UTC m=+0.165232572 container attach cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Oct 10 09:53:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:12 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 3.
Oct 10 09:53:12 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:53:12 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.575s CPU time.
Oct 10 09:53:12 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:53:12 compute-0 podman[133289]: 2025-10-10 09:53:12.609898856 +0000 UTC m=+0.046000934 container create c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3273013b7c2f22df3bb08013777873058a29fa65dc57843084d85daaff9dac81/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3273013b7c2f22df3bb08013777873058a29fa65dc57843084d85daaff9dac81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3273013b7c2f22df3bb08013777873058a29fa65dc57843084d85daaff9dac81/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3273013b7c2f22df3bb08013777873058a29fa65dc57843084d85daaff9dac81/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:53:12 compute-0 podman[133289]: 2025-10-10 09:53:12.675800729 +0000 UTC m=+0.111902837 container init c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 09:53:12 compute-0 podman[133289]: 2025-10-10 09:53:12.6827982 +0000 UTC m=+0.118900278 container start c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 10 09:53:12 compute-0 podman[133289]: 2025-10-10 09:53:12.590536075 +0000 UTC m=+0.026638173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:53:12 compute-0 bash[133289]: c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c
Oct 10 09:53:12 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:53:12 compute-0 lvm[133345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:53:12 compute-0 lvm[133345]: VG ceph_vg0 finished
Oct 10 09:53:12 compute-0 pensive_boyd[133161]: {}
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:53:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:53:12 compute-0 systemd[1]: libpod-cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222.scope: Deactivated successfully.
Oct 10 09:53:12 compute-0 systemd[1]: libpod-cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222.scope: Consumed 1.231s CPU time.
Oct 10 09:53:12 compute-0 podman[133144]: 2025-10-10 09:53:12.803892707 +0000 UTC m=+0.945965843 container died cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 09:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d83febd7bdd93b6e0176aa1130f5988ab36bd952f5927477d97a0655e2b43c34-merged.mount: Deactivated successfully.
Oct 10 09:53:12 compute-0 podman[133144]: 2025-10-10 09:53:12.85777487 +0000 UTC m=+0.999847976 container remove cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:53:12 compute-0 systemd[1]: libpod-conmon-cb03bff749f2476aac058d1a7c3aa46c4eac811d3cda95ba45dac11bdc84b222.scope: Deactivated successfully.
Oct 10 09:53:12 compute-0 sudo[132863]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:53:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:53:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:13 compute-0 sudo[133426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:53:13 compute-0 sudo[133426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:13 compute-0 sudo[133426]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:13 compute-0 python3.9[133524]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 09:53:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:13.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:13 compute-0 ceph-mon[73551]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:14 compute-0 python3.9[133675]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:53:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.964740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994964796, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 409, "num_deletes": 251, "total_data_size": 375558, "memory_usage": 383496, "flush_reason": "Manual Compaction"}
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994968195, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 372836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12762, "largest_seqno": 13170, "table_properties": {"data_size": 370373, "index_size": 563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5935, "raw_average_key_size": 18, "raw_value_size": 365412, "raw_average_value_size": 1131, "num_data_blocks": 24, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089979, "oldest_key_time": 1760089979, "file_creation_time": 1760089994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 3480 microseconds, and 1594 cpu microseconds.
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.968232) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 372836 bytes OK
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.968247) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.970098) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.970111) EVENT_LOG_v1 {"time_micros": 1760089994970108, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.970126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 372984, prev total WAL file size 372984, number of live WAL files 2.
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.970568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(364KB)], [29(13MB)]
Oct 10 09:53:14 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994970594, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 14659478, "oldest_snapshot_seqno": -1}
Oct 10 09:53:14 compute-0 python3.9[133826]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:53:14 compute-0 ceph-mon[73551]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4239 keys, 12701376 bytes, temperature: kUnknown
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995045223, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12701376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12670472, "index_size": 19210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108637, "raw_average_key_size": 25, "raw_value_size": 12590501, "raw_average_value_size": 2970, "num_data_blocks": 813, "num_entries": 4239, "num_filter_entries": 4239, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760089994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.045445) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12701376 bytes
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.046507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.2 rd, 170.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(73.4) write-amplify(34.1) OK, records in: 4754, records dropped: 515 output_compression: NoCompression
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.046522) EVENT_LOG_v1 {"time_micros": 1760089995046515, "job": 12, "event": "compaction_finished", "compaction_time_micros": 74699, "compaction_time_cpu_micros": 29636, "output_level": 6, "num_output_files": 1, "total_output_size": 12701376, "num_input_records": 4754, "num_output_records": 4239, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995046662, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995049040, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:14.970502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.049124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.049133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.049136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.049140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-09:53:15.049143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:53:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:15.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:53:15 compute-0 sshd-session[132015]: Connection closed by 192.168.122.30 port 45738
Oct 10 09:53:15 compute-0 sshd-session[132012]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:53:15 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Oct 10 09:53:15 compute-0 systemd[1]: session-48.scope: Consumed 6.354s CPU time.
Oct 10 09:53:15 compute-0 systemd-logind[806]: Session 48 logged out. Waiting for processes to exit.
Oct 10 09:53:15 compute-0 systemd-logind[806]: Removed session 48.
Oct 10 09:53:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 09:53:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:15.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:53:16
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'images', '.nfs', 'cephfs.cephfs.data', 'volumes', 'backups']
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:53:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:53:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:53:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:53:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:53:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:53:16.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:53:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:17] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:53:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:17] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:53:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:17 compute-0 ceph-mon[73551]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:53:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:53:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2752 writes, 13K keys, 2752 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2752 writes, 2752 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2752 writes, 13K keys, 2752 commit groups, 1.0 writes per commit group, ingest: 23.74 MB, 0.04 MB/s
                                           Interval WAL: 2752 writes, 2752 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    161.5      0.13              0.07         6    0.021       0      0       0.0       0.0
                                             L6      1/0   12.11 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    166.4    145.8      0.41              0.20         5    0.083     21K   2290       0.0       0.0
                                            Sum      1/0   12.11 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    127.1    149.5      0.54              0.26        11    0.049     21K   2290       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    128.0    150.4      0.54              0.26        10    0.054     21K   2290       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    166.4    145.8      0.41              0.20         5    0.083     21K   2290       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    165.7      0.12              0.07         5    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b2d7d9350#2 capacity: 304.00 MB usage: 2.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(183,2.50 MB,0.822077%) FilterBlock(12,69.42 KB,0.0223009%) IndexBlock(12,132.70 KB,0.0426292%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 09:53:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:53:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:53:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:53:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:19 compute-0 ceph-mon[73551]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:53:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:19.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:53:20 compute-0 sshd-session[133856]: Accepted publickey for zuul from 192.168.122.30 port 49058 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:53:20 compute-0 systemd-logind[806]: New session 49 of user zuul.
Oct 10 09:53:20 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 10 09:53:20 compute-0 sshd-session[133856]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:53:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:21.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:21 compute-0 ceph-mon[73551]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:53:21 compute-0 python3.9[134010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:53:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:21.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:23 compute-0 sudo[134166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdupmcrqzddbxolioexmjtlakcwfajrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090002.7424114-110-77693569723128/AnsiballZ_file.py'
Oct 10 09:53:23 compute-0 sudo[134166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:23.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:23 compute-0 python3.9[134168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:23 compute-0 ceph-mon[73551]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:23 compute-0 sudo[134166]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:23 compute-0 sudo[134318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxofyhpttdonyvpkfdlsclfblavquxfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090003.5863302-110-31279260394458/AnsiballZ_file.py'
Oct 10 09:53:23 compute-0 sudo[134318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:24 compute-0 python3.9[134320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:24 compute-0 sudo[134318]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:24 compute-0 sudo[134472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgogmexaswpahbcwxklpmannvttkvowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090004.3426573-156-279087761502207/AnsiballZ_stat.py'
Oct 10 09:53:24 compute-0 sudo[134472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:53:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:53:25 compute-0 python3.9[134474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:25 compute-0 sudo[134472]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e90000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:25.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:25 compute-0 ceph-mon[73551]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:25 compute-0 sudo[134610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uddhqlxvpdsvdtxhjmnegiwbinvhsloh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090004.3426573-156-279087761502207/AnsiballZ_copy.py'
Oct 10 09:53:25 compute-0 sudo[134610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:25 compute-0 python3.9[134612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090004.3426573-156-279087761502207/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=be06f43db55c5e995ff49623560144de16c20d75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:25 compute-0 sudo[134610]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:26 compute-0 sudo[134763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnxvjyvkjvaqkkgqfetetjjdlwsvyzwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090005.9367597-156-198880081799554/AnsiballZ_stat.py'
Oct 10 09:53:26 compute-0 sudo[134763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:26 compute-0 python3.9[134765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:26 compute-0 sudo[134763]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:26 compute-0 sudo[134887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfaafolvyynrqybyuyuuloyucfhjcqbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090005.9367597-156-198880081799554/AnsiballZ_copy.py'
Oct 10 09:53:26 compute-0 sudo[134887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:53:26.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:53:27 compute-0 python3.9[134889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090005.9367597-156-198880081799554/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6d432417c0c3c485924638569c72973f4b3272fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:27 compute-0 sudo[134887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:27] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:53:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:27] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 10 09:53:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:27.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:27 compute-0 ceph-mon[73551]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:27 compute-0 sudo[135039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzjxcfpudthrzorqtgwkrfsyfccrbch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090007.3231769-156-255074102463963/AnsiballZ_stat.py'
Oct 10 09:53:27 compute-0 sudo[135039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:27 compute-0 python3.9[135041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:27 compute-0 sudo[135039]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:28 compute-0 sudo[135163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsdfydxvlncmhrwsvdwdegewctavucqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090007.3231769-156-255074102463963/AnsiballZ_copy.py'
Oct 10 09:53:28 compute-0 sudo[135163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:28 compute-0 python3.9[135165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090007.3231769-156-255074102463963/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=37f94bcc30ec270348dd63c9e9f60d6c40f257d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:28 compute-0 sudo[135163]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095328 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:53:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:29 compute-0 sudo[135316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yppzdtteowtmmxpnxnkydtidsxejllpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090008.8772247-292-70425936149454/AnsiballZ_file.py'
Oct 10 09:53:29 compute-0 sudo[135316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:29 compute-0 python3.9[135318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:29 compute-0 sudo[135316]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:29 compute-0 ceph-mon[73551]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:29 compute-0 sudo[135468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrkuztmnxgnrvbjkxdkwbkdccjhduvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090009.580437-292-231613154170941/AnsiballZ_file.py'
Oct 10 09:53:29 compute-0 sudo[135468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:30 compute-0 python3.9[135470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:30 compute-0 sudo[135468]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:53:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e880025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:30 compute-0 sudo[135621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzlsdcthwfaautqszsnrvmxsjxbjnyna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090010.3301752-337-29917594013348/AnsiballZ_stat.py'
Oct 10 09:53:30 compute-0 sudo[135621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:30 compute-0 python3.9[135623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:30 compute-0 sudo[135621]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:31 compute-0 sudo[135745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pppvpfgewsdsszkevrsnpkbefthnmfsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090010.3301752-337-29917594013348/AnsiballZ_copy.py'
Oct 10 09:53:31 compute-0 sudo[135745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:53:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:31 compute-0 python3.9[135747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090010.3301752-337-29917594013348/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6b621d6be6e47fe7394f459f79f63e67a6dcebbf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:31 compute-0 sudo[135745]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:31 compute-0 ceph-mon[73551]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:53:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:31 compute-0 sudo[135754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:53:31 compute-0 sudo[135754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:31 compute-0 sudo[135754]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:31 compute-0 sudo[135922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmflmqhgcffmkeofycrzxhxdrfksxlvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090011.6454384-337-209843342582082/AnsiballZ_stat.py'
Oct 10 09:53:31 compute-0 sudo[135922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:32 compute-0 python3.9[135924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:32 compute-0 sudo[135922]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:53:32 compute-0 sudo[136046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtuuskeplqmhzbntencgbsjalzkeohbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090011.6454384-337-209843342582082/AnsiballZ_copy.py'
Oct 10 09:53:32 compute-0 sudo[136046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:32 compute-0 python3.9[136048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090011.6454384-337-209843342582082/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=abcc61006dfeb8ab87ea24afb3b53290e7b990dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:32 compute-0 sudo[136046]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e880025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:33 compute-0 sudo[136199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvmndhhagbhhtnweriantrdzpphgisyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090012.9220517-337-20839169668703/AnsiballZ_stat.py'
Oct 10 09:53:33 compute-0 sudo[136199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:33 compute-0 python3.9[136201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:33 compute-0 sudo[136199]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:33 compute-0 ceph-mon[73551]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:53:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:33.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:33 compute-0 sudo[136322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqfttxelqhkyvlcyrdfbveqsvaubsvyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090012.9220517-337-20839169668703/AnsiballZ_copy.py'
Oct 10 09:53:33 compute-0 sudo[136322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:34 compute-0 python3.9[136324]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090012.9220517-337-20839169668703/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=96d5f7eb237fad64de662f4b4922d8b7ccaf78a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:34 compute-0 sudo[136322]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:34 compute-0 sudo[136476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egvydcnotxgtcbpmpdpdsiheeajlbcgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090014.255855-466-276701448597679/AnsiballZ_file.py'
Oct 10 09:53:34 compute-0 sudo[136476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:35 compute-0 python3.9[136478]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:35 compute-0 sudo[136476]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e880032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:35.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:35 compute-0 sudo[136628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknehyvsclmvefmwolxggvhhvipqgiol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090015.1779885-466-21199216168657/AnsiballZ_file.py'
Oct 10 09:53:35 compute-0 sudo[136628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:35 compute-0 ceph-mon[73551]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:35 compute-0 python3.9[136630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:35 compute-0 sudo[136628]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:35.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:36 compute-0 sudo[136781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdnodtetyrgeykbtzpvvibgzcaxagua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090015.884125-514-255150058908432/AnsiballZ_stat.py'
Oct 10 09:53:36 compute-0 sudo[136781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:36 compute-0 python3.9[136783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:36 compute-0 sudo[136781]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:36 compute-0 sudo[136905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfexdmlgwnczzgkocoqnsdezevlllxpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090015.884125-514-255150058908432/AnsiballZ_copy.py'
Oct 10 09:53:36 compute-0 sudo[136905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:36 compute-0 python3.9[136907]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090015.884125-514-255150058908432/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5d7a0f8959da299d1d26e0676f3ea080656af126 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:53:36.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:53:36 compute-0 sudo[136905]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:37 compute-0 sudo[137057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvksinspobhqwpmkmdysoixohwgqtcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090017.1205275-514-1803079097201/AnsiballZ_stat.py'
Oct 10 09:53:37 compute-0 sudo[137057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:53:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:53:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000086s ======
Oct 10 09:53:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:37.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000086s
Oct 10 09:53:37 compute-0 python3.9[137059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:37 compute-0 ceph-mon[73551]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:37 compute-0 sudo[137057]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:37.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:37 compute-0 sudo[137180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdrordhkmwdgetnihbqkhpukpzariodk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090017.1205275-514-1803079097201/AnsiballZ_copy.py'
Oct 10 09:53:37 compute-0 sudo[137180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:38 compute-0 python3.9[137182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090017.1205275-514-1803079097201/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=abcc61006dfeb8ab87ea24afb3b53290e7b990dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:38 compute-0 sudo[137180]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e880032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:38 compute-0 sudo[137333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafehoksdafcmtujfjpcroldzinhlmnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090018.344022-514-218826678718315/AnsiballZ_stat.py'
Oct 10 09:53:38 compute-0 sudo[137333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:38 compute-0 python3.9[137335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:38 compute-0 sudo[137333]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:39 compute-0 sudo[137457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcwheppergzsqqscxyrznbfqlnaftyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090018.344022-514-218826678718315/AnsiballZ_copy.py'
Oct 10 09:53:39 compute-0 sudo[137457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:39 compute-0 python3.9[137459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090018.344022-514-218826678718315/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0317ddd747d49bb5425b3cca69dcb8ff5cd5973c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:39 compute-0 sudo[137457]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:39 compute-0 ceph-mon[73551]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:39.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:40 compute-0 sudo[137610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfzoepoturnscpbjjkenhikqazrtjmuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090020.17024-682-126983025958598/AnsiballZ_file.py'
Oct 10 09:53:40 compute-0 sudo[137610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:40 compute-0 python3.9[137612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:40 compute-0 sudo[137610]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:41 compute-0 sudo[137763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoocblrvyfytlywqzhkltzcunjqswzab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090020.8776023-708-178377703278565/AnsiballZ_stat.py'
Oct 10 09:53:41 compute-0 sudo[137763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:41 compute-0 python3.9[137765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:41 compute-0 sudo[137763]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:41 compute-0 ceph-mon[73551]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:41.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:41 compute-0 sudo[137886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amoitgcewqrvdextajrironekvyyhkmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090020.8776023-708-178377703278565/AnsiballZ_copy.py'
Oct 10 09:53:41 compute-0 sudo[137886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:42 compute-0 python3.9[137888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090020.8776023-708-178377703278565/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:42 compute-0 sudo[137886]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:42 compute-0 sudo[138039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgyuyzffovpihusxhefoaxdxtqeerxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090022.2178931-765-244002618283964/AnsiballZ_file.py'
Oct 10 09:53:42 compute-0 sudo[138039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:42 compute-0 python3.9[138041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:42 compute-0 sudo[138039]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:43 compute-0 sudo[138192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbbsdqixtycvnzpeuuvnsphusxrxpjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090022.908975-789-143105722604577/AnsiballZ_stat.py'
Oct 10 09:53:43 compute-0 sudo[138192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:43 compute-0 python3.9[138194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:43 compute-0 sudo[138192]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:43.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:43 compute-0 ceph-mon[73551]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:43 compute-0 sudo[138315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irkmympdnzolcjtvrpgpfxotctoithig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090022.908975-789-143105722604577/AnsiballZ_copy.py'
Oct 10 09:53:43 compute-0 sudo[138315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:44 compute-0 python3.9[138317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090022.908975-789-143105722604577/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:44 compute-0 sudo[138315]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:44 compute-0 sudo[138468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktnvbhlbeojnhydauukklsrqepnsctqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090024.3368769-836-213545942289966/AnsiballZ_file.py'
Oct 10 09:53:44 compute-0 sudo[138468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:44 compute-0 python3.9[138470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:44 compute-0 sudo[138468]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:45.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:45 compute-0 sudo[138621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nimhtmanwnygxqvwaheakespriccdjfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090025.091376-859-66839198748379/AnsiballZ_stat.py'
Oct 10 09:53:45 compute-0 sudo[138621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:45 compute-0 ceph-mon[73551]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:45 compute-0 python3.9[138623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:45 compute-0 sudo[138621]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:46 compute-0 sudo[138745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzogosradjmmcmdlbkpbaltxbdvtcmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090025.091376-859-66839198748379/AnsiballZ_copy.py'
Oct 10 09:53:46 compute-0 sudo[138745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:53:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:46 compute-0 python3.9[138747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090025.091376-859-66839198748379/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:46 compute-0 sudo[138745]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:53:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:53:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:46 compute-0 sudo[138898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtxhjsczmwhrusccljahblopanldgjah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090026.5581377-908-97646581049055/AnsiballZ_file.py'
Oct 10 09:53:46 compute-0 sudo[138898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:53:46.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:53:47 compute-0 python3.9[138900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:47 compute-0 sudo[138898]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:47] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:53:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:47] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:53:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:47 compute-0 sudo[139050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofwrpuebltehqhtpllpxryljfaerdnft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090027.3293085-934-104392207948659/AnsiballZ_stat.py'
Oct 10 09:53:47 compute-0 sudo[139050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:47 compute-0 ceph-mon[73551]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:47 compute-0 python3.9[139052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:47 compute-0 sudo[139050]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:48 compute-0 sudo[139174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpkgulykwadwhcukacqyhjqjobvozes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090027.3293085-934-104392207948659/AnsiballZ_copy.py'
Oct 10 09:53:48 compute-0 sudo[139174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:48 compute-0 python3.9[139176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090027.3293085-934-104392207948659/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:48 compute-0 sudo[139174]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:49 compute-0 sudo[139327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elxjaowsmgxqftnovkcztnurrhiorzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090028.725747-983-208717932522574/AnsiballZ_file.py'
Oct 10 09:53:49 compute-0 sudo[139327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:49 compute-0 python3.9[139329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:49 compute-0 sudo[139327]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:49.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:49 compute-0 sudo[139479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orkxjernooocevnmkzacwvptnyvplrod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090029.4411085-1010-170075637873048/AnsiballZ_stat.py'
Oct 10 09:53:49 compute-0 sudo[139479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:49 compute-0 ceph-mon[73551]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:49 compute-0 python3.9[139481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:49 compute-0 sudo[139479]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:50 compute-0 sudo[139603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzbkzpbaoqbfowrbtzyceyfpqobxpjle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090029.4411085-1010-170075637873048/AnsiballZ_copy.py'
Oct 10 09:53:50 compute-0 sudo[139603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:50 compute-0 python3.9[139605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090029.4411085-1010-170075637873048/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:50 compute-0 sudo[139603]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:51 compute-0 sudo[139756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxubfoslcdwrcrcaoitvvsryvhzelaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090030.9680445-1046-36107761668718/AnsiballZ_file.py'
Oct 10 09:53:51 compute-0 sudo[139756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:51 compute-0 python3.9[139758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:51 compute-0 sudo[139756]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:51 compute-0 sudo[139783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:53:51 compute-0 sudo[139783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:51 compute-0 sudo[139783]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:51 compute-0 ceph-mon[73551]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:52 compute-0 sudo[139934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wykpxgaxunrnbaagrdnapuvvawmjopgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090031.7304611-1063-280092799377820/AnsiballZ_stat.py'
Oct 10 09:53:52 compute-0 sudo[139934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:52 compute-0 python3.9[139936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:52 compute-0 sudo[139934]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:52 compute-0 sudo[140057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvaenzxgsernywkquzpigrdmzhbvqgqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090031.7304611-1063-280092799377820/AnsiballZ_copy.py'
Oct 10 09:53:52 compute-0 sudo[140057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:52 compute-0 python3.9[140059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090031.7304611-1063-280092799377820/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:52 compute-0 sudo[140057]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:53.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:53 compute-0 ceph-mon[73551]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000043s ======
Oct 10 09:53:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000043s
Oct 10 09:53:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:55.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:55 compute-0 ceph-mon[73551]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:56 compute-0 sshd-session[133859]: Connection closed by 192.168.122.30 port 49058
Oct 10 09:53:56 compute-0 sshd-session[133856]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:53:56 compute-0 systemd-logind[806]: Session 49 logged out. Waiting for processes to exit.
Oct 10 09:53:56 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 10 09:53:56 compute-0 systemd[1]: session-49.scope: Consumed 25.340s CPU time.
Oct 10 09:53:56 compute-0 systemd-logind[806]: Removed session 49.
Oct 10 09:53:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:53:56.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:53:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:57] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:53:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:53:57] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:53:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:57 compute-0 ceph-mon[73551]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:53:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 10 09:53:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:59.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:53:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:59.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:59 compute-0 ceph-mon[73551]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:54:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:01 compute-0 ceph-mon[73551]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:02 compute-0 sshd-session[140096]: Accepted publickey for zuul from 192.168.122.30 port 45092 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:54:02 compute-0 systemd-logind[806]: New session 50 of user zuul.
Oct 10 09:54:02 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 10 09:54:02 compute-0 sshd-session[140096]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:54:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:54:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:02 compute-0 sudo[140251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooyxdbylgxrxennehvciysmzgpbmkoww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090042.1314287-26-265145278032615/AnsiballZ_file.py'
Oct 10 09:54:02 compute-0 sudo[140251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:03 compute-0 python3.9[140253]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:03 compute-0 sudo[140251]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:03.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:03 compute-0 sudo[140403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brthtbpqrdwkpavtxzwtzznoawqpkidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090043.2752867-62-142556004156177/AnsiballZ_stat.py'
Oct 10 09:54:03 compute-0 sudo[140403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:03 compute-0 ceph-mon[73551]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:54:04 compute-0 python3.9[140405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:04 compute-0 sudo[140403]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:04 compute-0 sudo[140527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwseofgsevzrvgolmcovoxzgphzwscm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090043.2752867-62-142556004156177/AnsiballZ_copy.py'
Oct 10 09:54:04 compute-0 sudo[140527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:04 compute-0 python3.9[140529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090043.2752867-62-142556004156177/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f4f20d3bcbb08befb7837fd0e595f186c33a7cc2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:04 compute-0 sudo[140527]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:05 compute-0 sudo[140680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hutofbqvgenvarlidgowntloaadbyjei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090044.897043-62-42972958656971/AnsiballZ_stat.py'
Oct 10 09:54:05 compute-0 sudo[140680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:05 compute-0 python3.9[140682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:05 compute-0 sudo[140680]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:05.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:05.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:05 compute-0 sudo[140803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchurgkoclqzgiuhdpyttcfmwnynvwze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090044.897043-62-42972958656971/AnsiballZ_copy.py'
Oct 10 09:54:05 compute-0 sudo[140803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:05 compute-0 ceph-mon[73551]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:05 compute-0 python3.9[140805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090044.897043-62-42972958656971/.source.conf _original_basename=ceph.conf follow=False checksum=1a4b9adde8f120db415fb0ad56382b109e0fedc1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:05 compute-0 sudo[140803]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:06 compute-0 sshd-session[140100]: Connection closed by 192.168.122.30 port 45092
Oct 10 09:54:06 compute-0 sshd-session[140096]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:54:06 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 10 09:54:06 compute-0 systemd[1]: session-50.scope: Consumed 2.904s CPU time.
Oct 10 09:54:06 compute-0 systemd-logind[806]: Session 50 logged out. Waiting for processes to exit.
Oct 10 09:54:06 compute-0 systemd-logind[806]: Removed session 50.
Oct 10 09:54:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:06.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:54:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:07] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:54:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:07] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:54:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:07.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:54:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:07.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:54:07 compute-0 ceph-mon[73551]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:09.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:09 compute-0 ceph-mon[73551]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:11.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:11 compute-0 sudo[140836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:54:11 compute-0 sudo[140836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:11 compute-0 sudo[140836]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:11 compute-0 ceph-mon[73551]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:11 compute-0 sshd-session[140861]: Accepted publickey for zuul from 192.168.122.30 port 59414 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:54:11 compute-0 systemd-logind[806]: New session 51 of user zuul.
Oct 10 09:54:11 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 10 09:54:12 compute-0 sshd-session[140861]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:54:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:13 compute-0 python3.9[141016]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:54:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:13 compute-0 sudo[141021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:13 compute-0 sudo[141021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-0 sudo[141021]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:13 compute-0 sudo[141046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 09:54:13 compute-0 sudo[141046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:13.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:13 compute-0 sudo[141046]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:54:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:54:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 sudo[141138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:13 compute-0 sudo[141138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-0 sudo[141138]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:54:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:54:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 sudo[141192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:54:13 compute-0 sudo[141192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-0 ceph-mon[73551]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-0 sudo[141308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bheycyetpidejajfksmgujnlgtqqqggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090053.7724116-62-247587636420950/AnsiballZ_file.py'
Oct 10 09:54:14 compute-0 sudo[141308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:14 compute-0 sudo[141192]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:14 compute-0 python3.9[141312]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:14 compute-0 sudo[141308]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:14 compute-0 sudo[141425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:14 compute-0 sudo[141425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:14 compute-0 sudo[141425]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:14 compute-0 sudo[141474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:54:14 compute-0 sudo[141474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:14 compute-0 sudo[141524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbblakubvayumpsrofpbfbrcjcdxmyqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090054.647062-62-99134749563357/AnsiballZ_file.py'
Oct 10 09:54:14 compute-0 sudo[141524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:54:14 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:54:15 compute-0 python3.9[141527]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:15 compute-0 sudo[141524]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.376032725 +0000 UTC m=+0.049911929 container create 58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:54:15 compute-0 systemd[1]: Started libpod-conmon-58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1.scope.
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.354575466 +0000 UTC m=+0.028454690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.472667302 +0000 UTC m=+0.146546486 container init 58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:54:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.484593779 +0000 UTC m=+0.158472963 container start 58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.48809945 +0000 UTC m=+0.161978634 container attach 58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gates, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:54:15 compute-0 cool_gates[141616]: 167 167
Oct 10 09:54:15 compute-0 systemd[1]: libpod-58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1.scope: Deactivated successfully.
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.492478289 +0000 UTC m=+0.166357483 container died 58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2de27f59240d5d0204388cbc3565beca13b11262256b6839f1afe64a67c108a0-merged.mount: Deactivated successfully.
Oct 10 09:54:15 compute-0 podman[141595]: 2025-10-10 09:54:15.54058067 +0000 UTC m=+0.214459854 container remove 58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 09:54:15 compute-0 systemd[1]: libpod-conmon-58fc1d66fc04e941c9f5253747c55d69a6396ba0d1c7b585ccdb94b0bc713bc1.scope: Deactivated successfully.
Oct 10 09:54:15 compute-0 podman[141712]: 2025-10-10 09:54:15.709502074 +0000 UTC m=+0.042831266 container create a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:54:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:15 compute-0 systemd[1]: Started libpod-conmon-a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2.scope.
Oct 10 09:54:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab855d904d9efad49200bc7aaab23500318ae4352e3535046a3ee56a96b332f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab855d904d9efad49200bc7aaab23500318ae4352e3535046a3ee56a96b332f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab855d904d9efad49200bc7aaab23500318ae4352e3535046a3ee56a96b332f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab855d904d9efad49200bc7aaab23500318ae4352e3535046a3ee56a96b332f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab855d904d9efad49200bc7aaab23500318ae4352e3535046a3ee56a96b332f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:15 compute-0 podman[141712]: 2025-10-10 09:54:15.689925405 +0000 UTC m=+0.023254627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:15 compute-0 podman[141712]: 2025-10-10 09:54:15.79981466 +0000 UTC m=+0.133143872 container init a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:54:15 compute-0 podman[141712]: 2025-10-10 09:54:15.809838108 +0000 UTC m=+0.143167300 container start a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:54:15 compute-0 podman[141712]: 2025-10-10 09:54:15.815190077 +0000 UTC m=+0.148519289 container attach a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:54:15 compute-0 ceph-mon[73551]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:16 compute-0 python3.9[141782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:54:16 compute-0 quizzical_villani[141774]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:54:16 compute-0 quizzical_villani[141774]: --> All data devices are unavailable
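This short-lived container is cephadm's OSD provisioning pass: the drive-group spec matched one LVM data device and no physical ones, and that device is already consumed (the existing osd.0 listed further down), so nothing new gets deployed. One way to check availability is ceph-volume's JSON inventory; a sketch, assuming the invocation pattern seen elsewhere in this log and the usual inventory fields ("path", "available", "rejected_reasons"):

    # Sketch: list which devices ceph-volume would consider for new OSDs.
    import json
    import subprocess

    out = subprocess.run(
        ['cephadm', 'ceph-volume', '--', 'inventory', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        if dev.get('available'):
            print(dev.get('path'), '-> available')
        else:
            print(dev.get('path'), '-> unavailable:',
                  ', '.join(dev.get('rejected_reasons', [])))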
Oct 10 09:54:16 compute-0 systemd[1]: libpod-a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2.scope: Deactivated successfully.
Oct 10 09:54:16 compute-0 podman[141712]: 2025-10-10 09:54:16.194069113 +0000 UTC m=+0.527398295 container died a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 09:54:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095416 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
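HAProxy declares the peer's NFS backend DOWN on a layer-4 failure: the health check is nothing more than a TCP connect, and it was refused because the ganesha instance behind nfs.cephfs.1 is not listening. Reduced to code, such a check looks like this (illustrative only; the peer address is a placeholder):

    # A layer-4 health check boils down to: does a TCP connect complete?
    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:        # ECONNREFUSED, timeout, host unreachable, ...
            return False

    print(l4_check('192.168.122.101', 2049))   # placeholder peer NFS endpoint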
Oct 10 09:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ab855d904d9efad49200bc7aaab23500318ae4352e3535046a3ee56a96b332f-merged.mount: Deactivated successfully.
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:54:16
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', 'volumes', '.nfs']
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
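With all 353 PGs active+clean there is nothing for the balancer to optimize, so the upmap pass prepares zero of its per-round budget of 10 changes. The "max misplaced 0.050000" figure caps how much data a pass may set in motion at once:

    # The 5% misplaced cap applied to this cluster's PG count.
    total_pgs = 353
    max_misplaced_ratio = 0.05
    print(int(total_pgs * max_misplaced_ratio))   # at most 17 PGs in motion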
Oct 10 09:54:16 compute-0 podman[141712]: 2025-10-10 09:54:16.24616272 +0000 UTC m=+0.579491932 container remove a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 09:54:16 compute-0 systemd[1]: libpod-conmon-a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2.scope: Deactivated successfully.
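That completes one cephadm probe container: podman logs the whole lifecycle (image pull, create, init, start, attach, died, remove) inside a second, with systemd creating and tearing down the matching libpod and conmon scopes around it. The same pattern repeats below for romantic_wu, gallant_wilbur, nervous_boyd and quirky_bardeen. A sketch for folding such journal lines into per-container event lists, with a regex of my own making:

    # Sketch: group podman journal lines by container ID.
    import re
    from collections import defaultdict

    EVENT_RE = re.compile(r'container (\w+) ([0-9a-f]{64})')

    lifecycles = defaultdict(list)

    def feed(line: str) -> None:
        m = EVENT_RE.search(line)
        if m:
            event, cid = m.groups()
            lifecycles[cid[:12]].append(event)

    feed('container create a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2')
    feed('container died a8b5312bf6aaaa262d8776ebb4723af74c75a9f485b836f2db55bf2ee4e093d2')
    print(dict(lifecycles))   # {'a8b5312bf6aa': ['create', 'died']}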
Oct 10 09:54:16 compute-0 sudo[141474]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:54:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:16 compute-0 sudo[141830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:16 compute-0 sudo[141830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:16 compute-0 sudo[141830]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
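The pg_autoscaler numbers above follow a simple relation: pg target = space ratio x bias x the root's PG budget, with the result then quantized to a power of two and clamped by per-pool minimums (the "quantized to 1/16/32" values). The budget here works out to 300, consistent with 3 OSDs at the default 100 target PGs per OSD; that factor is inferred from the logged values, not stated in the log:

    # Reproduce the logged pg_autoscaler targets; root_pg_budget=300 is inferred
    # from the log and matches 3 OSDs x mon_target_pg_per_osd=100.
    root_pg_budget = 300

    def pg_target(space_ratio: float, bias: float) -> float:
        return space_ratio * bias * root_pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557 ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047 ('cephfs.cephfs.meta')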
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:54:16 compute-0 sudo[141878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:54:16 compute-0 sudo[141878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:54:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:54:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
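These ganesha RPC errors recur in step with the health probes above, which suggests the checks open bare TCP connections to an endpoint whose RPC layer expects a PROXY protocol preamble; the header parse then fails and the connection is marked dead. (The stray "%" is a formatting bug in the message itself, not a value.) For reference, a PROXY v1 preamble is a single text line sent before any application data; a sketch with placeholder addresses:

    # Sketch: open a connection and send a PROXY protocol v1 preamble first,
    # as a proxy configured with send-proxy would. Addresses are placeholders.
    import socket

    def connect_with_proxy_v1(host: str, port: int,
                              src: str = '192.168.122.100', sport: int = 40000):
        s = socket.create_connection((host, port), timeout=2.0)
        header = f'PROXY TCP4 {src} {host} {sport} {port}\r\n'
        s.sendall(header.encode('ascii'))   # must precede any application bytes
        return s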
Oct 10 09:54:16 compute-0 podman[141996]: 2025-10-10 09:54:16.879656329 +0000 UTC m=+0.049639612 container create b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 09:54:16 compute-0 systemd[1]: Started libpod-conmon-b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0.scope.
Oct 10 09:54:16 compute-0 podman[141996]: 2025-10-10 09:54:16.860145992 +0000 UTC m=+0.030129305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:54:16 compute-0 podman[141996]: 2025-10-10 09:54:16.969809901 +0000 UTC m=+0.139793244 container init b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:54:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:16 compute-0 ceph-mon[73551]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:16 compute-0 podman[141996]: 2025-10-10 09:54:16.983386181 +0000 UTC m=+0.153369474 container start b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 09:54:16 compute-0 podman[141996]: 2025-10-10 09:54:16.98686473 +0000 UTC m=+0.156848063 container attach b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 09:54:16 compute-0 romantic_wu[142032]: 167 167
Oct 10 09:54:16 compute-0 systemd[1]: libpod-b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0.scope: Deactivated successfully.
Oct 10 09:54:16 compute-0 podman[141996]: 2025-10-10 09:54:16.990742283 +0000 UTC m=+0.160725566 container died b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:54:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:16.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:54:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:16.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
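Alertmanager cannot deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2; both failures are at the TCP level (dial timeout / context deadline), i.e. the peers are unreachable rather than rejecting the request, and delivery is abandoned after two attempts. To test reachability of such an endpoint one can stand in a trivial receiver; the handler below is hypothetical, only the port and path shape come from the log:

    # Minimal stand-in webhook receiver for reachability testing. Alertmanager
    # POSTs a JSON payload whose "alerts" key carries the alert list.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            print('received', len(payload.get('alerts', [])), 'alert(s)')
            self.send_response(200)
            self.end_headers()

    if __name__ == '__main__':
        HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()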
Oct 10 09:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-413fa42509540e6745d71b2608d271c725a0f77abfed7c20b815daf88c71acf4-merged.mount: Deactivated successfully.
Oct 10 09:54:17 compute-0 podman[141996]: 2025-10-10 09:54:17.03362231 +0000 UTC m=+0.203605593 container remove b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_wu, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:54:17 compute-0 sudo[142074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejxgsoavczietepxdkrcphrkpzuzcwqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090056.3729637-131-63364085146242/AnsiballZ_seboolean.py'
Oct 10 09:54:17 compute-0 sudo[142074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:17 compute-0 systemd[1]: libpod-conmon-b4e0da32580f409e42dc3c116a88998ee8fa5d0e7055531749304e031d664cc0.scope: Deactivated successfully.
Oct 10 09:54:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.203132192 +0000 UTC m=+0.058271385 container create 81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:54:17 compute-0 python3.9[142081]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 09:54:17 compute-0 systemd[1]: Started libpod-conmon-81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2.scope.
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.182371484 +0000 UTC m=+0.037510687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:17 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca39a71a57cfa42de734ede5a7236cec35dbb9518880b19234df9077e18be37c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca39a71a57cfa42de734ede5a7236cec35dbb9518880b19234df9077e18be37c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca39a71a57cfa42de734ede5a7236cec35dbb9518880b19234df9077e18be37c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca39a71a57cfa42de734ede5a7236cec35dbb9518880b19234df9077e18be37c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.314555636 +0000 UTC m=+0.169694899 container init 81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.323416856 +0000 UTC m=+0.178556029 container start 81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.327068572 +0000 UTC m=+0.182207835 container attach 81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:54:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:17] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct 10 09:54:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:17] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct 10 09:54:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:17.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]: {
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:     "0": [
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:         {
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "devices": [
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "/dev/loop3"
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             ],
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "lv_name": "ceph_lv0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "lv_size": "21470642176",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "name": "ceph_lv0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "tags": {
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.cluster_name": "ceph",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.crush_device_class": "",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.encrypted": "0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.osd_id": "0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.type": "block",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.vdo": "0",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:                 "ceph.with_tpm": "0"
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             },
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "type": "block",
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:             "vg_name": "ceph_vg0"
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:         }
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]:     ]
Oct 10 09:54:17 compute-0 gallant_wilbur[142106]: }
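This JSON is the answer to the `ceph-volume lvm list --format json` call issued a moment earlier: one existing OSD (osd.0) backed by LV ceph_vg0/ceph_lv0 on /dev/loop3, which is why the provisioning pass above found no free data devices. Pulling out the interesting fields (key names as in the output above, trimmed to the ones used):

    # Extract OSD id, backing device and size from the lvm-list JSON above.
    import json

    raw = '''{
      "0": [
        {"devices": ["/dev/loop3"],
         "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "lv_size": "21470642176"}
      ]
    }'''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            gib = int(lv['lv_size']) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])} ({gib:.1f} GiB)")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20.0 GiB)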
Oct 10 09:54:17 compute-0 systemd[1]: libpod-81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2.scope: Deactivated successfully.
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.649213783 +0000 UTC m=+0.504352976 container died 81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca39a71a57cfa42de734ede5a7236cec35dbb9518880b19234df9077e18be37c-merged.mount: Deactivated successfully.
Oct 10 09:54:17 compute-0 podman[142089]: 2025-10-10 09:54:17.70569583 +0000 UTC m=+0.560835013 container remove 81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:54:17 compute-0 systemd[1]: libpod-conmon-81409670954b30d69658714aaee5b487f9c59b76444f0ff583f7e74e4d59ccb2.scope: Deactivated successfully.
Oct 10 09:54:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:17 compute-0 sudo[141878]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:17 compute-0 sudo[142130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:17 compute-0 sudo[142130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:17 compute-0 sudo[142130]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:17 compute-0 sudo[142155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:54:17 compute-0 sudo[142155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.364030884 +0000 UTC m=+0.070027437 container create 84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_boyd, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:54:18 compute-0 systemd[1]: Started libpod-conmon-84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053.scope.
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.337217076 +0000 UTC m=+0.043213609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.47014326 +0000 UTC m=+0.176139823 container init 84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_boyd, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.478943919 +0000 UTC m=+0.184940442 container start 84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_boyd, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.484854186 +0000 UTC m=+0.190850729 container attach 84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_boyd, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:54:18 compute-0 nervous_boyd[142238]: 167 167
Oct 10 09:54:18 compute-0 systemd[1]: libpod-84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053.scope: Deactivated successfully.
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.487379086 +0000 UTC m=+0.193375639 container died 84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_boyd, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:54:18 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 10 09:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed6e1ad89d59909cb47f4f6f52e77a9d8073259720b05449686c2cc2149cdd2e-merged.mount: Deactivated successfully.
Oct 10 09:54:18 compute-0 podman[142222]: 2025-10-10 09:54:18.548236751 +0000 UTC m=+0.254233314 container remove 84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:54:18 compute-0 systemd[1]: libpod-conmon-84d6f86a9bfca27a6b37d8499da364485f353850508ef010151edf2c6766f053.scope: Deactivated successfully.
Oct 10 09:54:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
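The monitor's periodic cache rebalance logs its budget in raw bytes; in MiB the split is easier to read:

    # The mon cache figures above, converted from bytes to MiB.
    for name, val in [('cache_size', 1020054731), ('inc_alloc', 348127232),
                      ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
        print(name, round(val / 2**20), 'MiB')
    # cache_size ~973 MiB; inc/full_alloc 332 MiB each; kv_alloc 304 MiB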
Oct 10 09:54:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:18 compute-0 podman[142267]: 2025-10-10 09:54:18.765597107 +0000 UTC m=+0.054191926 container create d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 09:54:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:18 compute-0 systemd[1]: Started libpod-conmon-d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496.scope.
Oct 10 09:54:18 compute-0 podman[142267]: 2025-10-10 09:54:18.73757754 +0000 UTC m=+0.026172359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350ca7d3ce7747371dc827f5b1b98b22855546cf033bc72392a4860a871870b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350ca7d3ce7747371dc827f5b1b98b22855546cf033bc72392a4860a871870b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350ca7d3ce7747371dc827f5b1b98b22855546cf033bc72392a4860a871870b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350ca7d3ce7747371dc827f5b1b98b22855546cf033bc72392a4860a871870b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:18 compute-0 podman[142267]: 2025-10-10 09:54:18.894610508 +0000 UTC m=+0.183205327 container init d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bardeen, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:54:18 compute-0 podman[142267]: 2025-10-10 09:54:18.904596063 +0000 UTC m=+0.193190872 container start d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:54:18 compute-0 podman[142267]: 2025-10-10 09:54:18.913204226 +0000 UTC m=+0.201799045 container attach d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:54:18 compute-0 sudo[142074]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:19 compute-0 ceph-mon[73551]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:19.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:19 compute-0 sudo[142506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfbgehxilwjxyqhfzsfjeilbvoyfmko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090059.312534-161-14061193318638/AnsiballZ_setup.py'
Oct 10 09:54:19 compute-0 sudo[142506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:19 compute-0 lvm[142509]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:54:19 compute-0 lvm[142509]: VG ceph_vg0 finished
Oct 10 09:54:19 compute-0 quirky_bardeen[142284]: {}
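The empty object printed by quirky_bardeen is the result of the matching `ceph-volume raw list --format json` call: no raw-mode (non-LVM) OSDs exist on this host, consistent with the single LVM-backed osd.0 reported above.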
Oct 10 09:54:19 compute-0 systemd[1]: libpod-d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496.scope: Deactivated successfully.
Oct 10 09:54:19 compute-0 systemd[1]: libpod-d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496.scope: Consumed 1.356s CPU time.
Oct 10 09:54:19 compute-0 conmon[142284]: conmon d99ce735ab3df2df3ca6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496.scope/container/memory.events
Oct 10 09:54:19 compute-0 podman[142267]: 2025-10-10 09:54:19.713055428 +0000 UTC m=+1.001650237 container died d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:54:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:19.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d350ca7d3ce7747371dc827f5b1b98b22855546cf033bc72392a4860a871870b-merged.mount: Deactivated successfully.
Oct 10 09:54:19 compute-0 podman[142267]: 2025-10-10 09:54:19.76845514 +0000 UTC m=+1.057049939 container remove d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_bardeen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:54:19 compute-0 systemd[1]: libpod-conmon-d99ce735ab3df2df3ca627d7561dc769405d35c9c0db1fb61a4abd575a1fe496.scope: Deactivated successfully.
Oct 10 09:54:19 compute-0 sudo[142155]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:54:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:54:19 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:19 compute-0 sudo[142528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:54:19 compute-0 python3.9[142510]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:54:19 compute-0 sudo[142528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:19 compute-0 sudo[142528]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:20 compute-0 sudo[142506]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:20 compute-0 sudo[142635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lulitaeldskalxzeihraygtoucapcueo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090059.312534-161-14061193318638/AnsiballZ_dnf.py'
Oct 10 09:54:20 compute-0 sudo[142635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:20 compute-0 python3.9[142637]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:54:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 09:54:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:21.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 09:54:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:54:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:21.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:54:21 compute-0 ceph-mon[73551]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:22 compute-0 sudo[142635]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:23 compute-0 sudo[142791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnbfzzgfxogpddoghmjqreiexzmhonjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090062.4513342-197-34211129539575/AnsiballZ_systemd.py'
Oct 10 09:54:23 compute-0 sudo[142791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:23 compute-0 python3.9[142793]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:54:23 compute-0 sudo[142791]: pam_unix(sudo:session): session closed for user root
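[annotation] The two tasks above (09:54:20 and 09:54:23) are the openvswitch bring-up: the ansible.legacy.dnf call ensures the openvswitch package is present, and the ansible.builtin.systemd call enables the unit and starts it. A minimal sketch of the same two steps outside Ansible, assuming root and the same package/unit names:

    import subprocess

    # Equivalent of the logged dnf task (state=present): install if missing.
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)

    # Equivalent of the logged systemd task (enabled=True, state=started):
    # enable the unit persistently and start it in one step.
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"],
                   check=True)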
Oct 10 09:54:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:23.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:23 compute-0 ceph-mon[73551]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:24 compute-0 sudo[142947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btottgrqiidpxetrinnxzekrogwggjuf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090063.734086-221-39525118509098/AnsiballZ_edpm_nftables_snippet.py'
Oct 10 09:54:24 compute-0 sudo[142947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 09:54:24 compute-0 python3[142949]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 10 09:54:24 compute-0 sudo[142947]: pam_unix(sudo:session): session closed for user root
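[annotation] The osp.edpm.edpm_nftables_snippet invocation above appends four Neutron tunnel rules (VXLAN 4789/udp, Geneve 6081/udp, plus two NOTRACK rules in the raw table) to /var/lib/edpm-config/firewall/ovn.yaml; the edpm_nftables_from_files step at 09:54:31 later collects every file in that directory. As a rough, hypothetical illustration of how one such entry could map onto nft syntax — this is not the module's implementation, and the default table/chain names here are invented:

    # Hypothetical renderer for one EDPM firewall snippet entry.
    def render_rule(entry):
        rule = entry["rule"]
        table = rule.get("table", "filter")      # assumed default
        chain = rule.get("chain", "EDPM_INPUT")  # invented chain name
        parts = ["add", "rule", "inet", table, chain,
                 rule["proto"], "dport", str(rule["dport"])]
        if rule.get("jump") == "NOTRACK":
            parts.append("notrack")              # nft's no-conntrack statement
        else:
            parts += ["ct", "state", "new", "counter", "accept"]
        return "nft " + " ".join(parts)

    print(render_rule({
        "rule_name": "120 neutron geneve networks no conntrack",
        "rule": {"proto": "udp", "dport": 6081, "table": "raw",
                 "chain": "OUTPUT", "jump": "NOTRACK"},
    }))
    # -> nft add rule inet raw OUTPUT udp dport 6081 notrack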
Oct 10 09:54:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:25 compute-0 sudo[143100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsfexjesgouqmxveazyqavkkhxtvnqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090064.7366908-248-71329372711371/AnsiballZ_file.py'
Oct 10 09:54:25 compute-0 sudo[143100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:25 compute-0 python3.9[143102]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:25 compute-0 sudo[143100]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:54:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:25.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:25 compute-0 ceph-mon[73551]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 09:54:26 compute-0 sudo[143253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgihbvzbyodswfejkdfulxrzeomagxvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090065.5822222-272-120083833080855/AnsiballZ_stat.py'
Oct 10 09:54:26 compute-0 sudo[143253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 09:54:26 compute-0 python3.9[143255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:26 compute-0 sudo[143253]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:26 compute-0 sudo[143331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pclgbxhamyrmwuxwtrevmhbkhzqtyiio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090065.5822222-272-120083833080855/AnsiballZ_file.py'
Oct 10 09:54:26 compute-0 sudo[143331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:26 compute-0 python3.9[143333]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:26 compute-0 sudo[143331]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:26.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:54:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:26.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:54:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:26.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:54:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:27] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct 10 09:54:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:27] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Oct 10 09:54:27 compute-0 sudo[143484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngnqhkvdswqebbbraocjqcgqcrixxppv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090067.220154-308-154570883312604/AnsiballZ_stat.py'
Oct 10 09:54:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:27.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:27 compute-0 sudo[143484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:27 compute-0 python3.9[143486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095427 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:54:27 compute-0 sudo[143484]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:27 compute-0 ceph-mon[73551]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 09:54:27 compute-0 sudo[143562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzbbbuycazaehxnqiipguvgjzlimqmbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090067.220154-308-154570883312604/AnsiballZ_file.py'
Oct 10 09:54:27 compute-0 sudo[143562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:28 compute-0 python3.9[143564]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.934o6txe recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:28 compute-0 sudo[143562]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:54:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:28 compute-0 sudo[143717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcsezbpjvuafpddpuzmuzukbatzhrdxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090068.4714823-344-63654971200996/AnsiballZ_stat.py'
Oct 10 09:54:28 compute-0 sudo[143717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:28 compute-0 python3.9[143719]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:29 compute-0 sudo[143717]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:29 compute-0 sudo[143795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxbpilstglapnbkgdwjcisycnxhboien ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090068.4714823-344-63654971200996/AnsiballZ_file.py'
Oct 10 09:54:29 compute-0 sudo[143795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:29.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:29 compute-0 python3.9[143797]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:29 compute-0 sudo[143795]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:29 compute-0 ceph-mon[73551]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:30 compute-0 sudo[143948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qicvwcpznctpfcrosuwcannvrsfoptyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090069.8333557-383-188247304229036/AnsiballZ_command.py'
Oct 10 09:54:30 compute-0 sudo[143948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:30 compute-0 python3.9[143950]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:30 compute-0 sudo[143948]: pam_unix(sudo:session): session closed for user root
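[annotation] `nft -j list ruleset` dumps the live ruleset as JSON rather than the usual text form, which makes the pre-change state easy to inspect programmatically; presumably the edpm_nftables_from_files step that follows compares it against the desired rules under /var/lib/edpm-config/firewall. A small sketch of consuming that JSON (the chain listing is just an example):

    import json
    import subprocess

    # Same command as the logged task: dump the ruleset as JSON.
    proc = subprocess.run(["nft", "-j", "list", "ruleset"],
                          capture_output=True, text=True, check=True)
    ruleset = json.loads(proc.stdout)

    # The document is {"nftables": [...]}, where each element wraps a
    # "metainfo", "table", "chain" or "rule" object.
    chains = [item["chain"]["name"]
              for item in ruleset.get("nftables", [])
              if "chain" in item]
    print(sorted(set(chains)))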
Oct 10 09:54:31 compute-0 ceph-mon[73551]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:54:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:31.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:31 compute-0 sudo[144102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbpfzoqqhwsrrxvcvgrgvgloixpfgrfi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090071.129156-407-96974769252881/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 09:54:31 compute-0 sudo[144102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:31 compute-0 python3[144104]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 09:54:31 compute-0 sudo[144102]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:31 compute-0 sudo[144105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:54:31 compute-0 sudo[144105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:31 compute-0 sudo[144105]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:32 compute-0 sudo[144280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffieetsvqrsikbddwhfdojbbcxelbdgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090072.0919604-431-112715141251781/AnsiballZ_stat.py'
Oct 10 09:54:32 compute-0 sudo[144280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:32 compute-0 python3.9[144282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:32 compute-0 sudo[144280]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:33 compute-0 ceph-mon[73551]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:33 compute-0 sudo[144406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoqdlrkqzxivubolufctxqeaabdklgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090072.0919604-431-112715141251781/AnsiballZ_copy.py'
Oct 10 09:54:33 compute-0 sudo[144406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:33 compute-0 python3.9[144408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090072.0919604-431-112715141251781/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:33 compute-0 sudo[144406]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:33 compute-0 sudo[144558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bekgnpewunqiexgxtsgxlvhzkyqihjzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090073.5899088-476-47821774186385/AnsiballZ_stat.py'
Oct 10 09:54:33 compute-0 sudo[144558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:34 compute-0 python3.9[144560]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:34 compute-0 sudo[144558]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:34 compute-0 sudo[144685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggybbqluumfieepntplwkduwtsygwptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090073.5899088-476-47821774186385/AnsiballZ_copy.py'
Oct 10 09:54:34 compute-0 sudo[144685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:35 compute-0 python3.9[144687]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090073.5899088-476-47821774186385/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:35 compute-0 sudo[144685]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:35 compute-0 ceph-mon[73551]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:35.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:35 compute-0 sudo[144837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjylcijncxviwokdqkmigbyxwfovzdfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090075.2517273-521-150515170984014/AnsiballZ_stat.py'
Oct 10 09:54:35 compute-0 sudo[144837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:35 compute-0 python3.9[144839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:35 compute-0 sudo[144837]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:36 compute-0 sudo[144963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whprtbaxrwjswinrglxnvrfrqndhjdqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090075.2517273-521-150515170984014/AnsiballZ_copy.py'
Oct 10 09:54:36 compute-0 sudo[144963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:36 compute-0 python3.9[144965]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090075.2517273-521-150515170984014/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:36 compute-0 sudo[144963]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:54:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:36.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:54:37 compute-0 sudo[145117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clueepqftljefmalxlcnheapjxqhqvxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090076.7684546-566-79220513720906/AnsiballZ_stat.py'
Oct 10 09:54:37 compute-0 sudo[145117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:37] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 10 09:54:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:37] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 10 09:54:37 compute-0 python3.9[145119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:37 compute-0 sudo[145117]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:37 compute-0 ceph-mon[73551]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:37.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:37 compute-0 sudo[145242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsfaganzhualfgjphvbjtaryfxsmubvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090076.7684546-566-79220513720906/AnsiballZ_copy.py'
Oct 10 09:54:37 compute-0 sudo[145242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:38 compute-0 python3.9[145244]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090076.7684546-566-79220513720906/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:38 compute-0 sudo[145242]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:54:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:39 compute-0 sudo[145396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guyaikzukxvbpbskytkatliootwkphnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090078.391682-611-122914667660083/AnsiballZ_stat.py'
Oct 10 09:54:39 compute-0 sudo[145396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:39 compute-0 python3.9[145398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:39 compute-0 sudo[145396]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:39 compute-0 ceph-mon[73551]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:54:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:54:39 compute-0 sudo[145523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqiciwkcqtbumuupgynaecenfkbjvwfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090078.391682-611-122914667660083/AnsiballZ_copy.py'
Oct 10 09:54:39 compute-0 sudo[145523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:39 compute-0 python3.9[145525]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090078.391682-611-122914667660083/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:39 compute-0 sudo[145523]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 09:54:40 compute-0 sudo[145676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxnjlflogdhiifaqenbsscsaomdvztih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090080.201183-656-126389202044260/AnsiballZ_file.py'
Oct 10 09:54:40 compute-0 sudo[145676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:40 compute-0 python3.9[145678]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:40 compute-0 sudo[145676]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:41 compute-0 sudo[145829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljzlcdefwcgmtomuazejlpqugaowthbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090080.9823902-680-259431577845081/AnsiballZ_command.py'
Oct 10 09:54:41 compute-0 sudo[145829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:41 compute-0 python3.9[145831]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:41 compute-0 ceph-mon[73551]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 09:54:41 compute-0 sudo[145829]: pam_unix(sudo:session): session closed for user root
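[annotation] This is the syntax check for the assembled ruleset: the five fragments are concatenated in load order (chains first, then flushes, rules, and the two jump files) and piped into `nft -c -f -`, which parses and validates from stdin without committing anything to the kernel. Only after it succeeds are the chains loaded (09:54:43) and the flush/rules/update-jumps fragments applied (09:54:45). A sketch of the same check, assuming the file layout from the log:

    import subprocess
    from pathlib import Path

    # Same concatenation order as the logged command.
    fragments = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
                 "edpm-update-jumps.nft", "edpm-jumps.nft"]
    combined = "".join((Path("/etc/nftables") / f).read_text()
                       for f in fragments)

    # -c: check only (dry run); -f -: read the ruleset from stdin.
    subprocess.run(["nft", "-c", "-f", "-"], input=combined,
                   text=True, check=True)
    print("combined ruleset parses cleanly")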
Oct 10 09:54:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:41.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095442 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:54:42 compute-0 sudo[145985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntobwxucbmwmbashtjfvhljuhsnnjmag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090081.8399417-704-117619473593213/AnsiballZ_blockinfile.py'
Oct 10 09:54:42 compute-0 sudo[145985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 09:54:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:54:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:42 compute-0 python3.9[145987]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:42 compute-0 sudo[145985]: pam_unix(sudo:session): session closed for user root
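[annotation] The blockinfile task makes the result persistent: it writes a managed block of include statements into /etc/sysconfig/nftables.conf (validated with `nft -c -f %s` before being committed), which nftables.service reads at boot. Note that only iptables.nft, edpm-chains.nft, edpm-rules.nft and edpm-jumps.nft are persisted; the flush and update-jump fragments appear to be used only for live reloads. A quick assumption-level check that the block landed, with the marker text reconstructed from the logged marker/marker_begin parameters:

    from pathlib import Path

    conf = Path("/etc/sysconfig/nftables.conf").read_text()

    # marker="# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN
    assert "# BEGIN ANSIBLE MANAGED BLOCK" in conf

    for name in ("iptables.nft", "edpm-chains.nft",
                 "edpm-rules.nft", "edpm-jumps.nft"):
        assert f'include "/etc/nftables/{name}"' in conf, name
    print("persistent include block present")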
Oct 10 09:54:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:43 compute-0 sudo[146138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtmdkjngxzacwgdlmcvyafydpdnpucus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090082.9222379-731-167112435958646/AnsiballZ_command.py'
Oct 10 09:54:43 compute-0 sudo[146138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:43 compute-0 python3.9[146140]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:43 compute-0 ceph-mon[73551]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 09:54:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:43.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:43 compute-0 sudo[146138]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:43.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:44 compute-0 sudo[146292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ponywgfxcjuwtmnlsnvmgmmmshggmdcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090083.770803-755-104655422053216/AnsiballZ_stat.py'
Oct 10 09:54:44 compute-0 sudo[146292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:44 compute-0 python3.9[146294]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:54:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 09:54:44 compute-0 sudo[146292]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:44 compute-0 sudo[146447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjoyekumnfqhxhvvtcgvvcvwjwlrkgoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090084.608847-779-184787012105886/AnsiballZ_command.py'
Oct 10 09:54:44 compute-0 sudo[146447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:45 compute-0 python3.9[146449]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:45 compute-0 sudo[146447]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:45.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:45 compute-0 ceph-mon[73551]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 09:54:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:54:45 compute-0 sudo[146602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwjvwegqvbkbgxvumeylcowftetclrqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090085.3954022-803-157650865747846/AnsiballZ_file.py'
Oct 10 09:54:45 compute-0 sudo[146602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:45.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:45 compute-0 python3.9[146604]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:45 compute-0 sudo[146602]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:54:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:54:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:54:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:46.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:54:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:47.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:54:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:47.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:54:47 compute-0 python3.9[146756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:54:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:47] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:54:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:47] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:54:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:47 compute-0 ceph-mon[73551]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 09:54:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095447 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:54:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:48 compute-0 sudo[146908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvrwcisdxgbtomqiqyuchosrxgvceuxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090087.9191768-923-185088913579818/AnsiballZ_command.py'
Oct 10 09:54:48 compute-0 sudo[146908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:48 compute-0 python3.9[146910]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 09:54:48 compute-0 ovs-vsctl[146911]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 10 09:54:48 compute-0 sudo[146908]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:49 compute-0 sudo[147062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyiexbufqvjagkrwgnisebuzupoxkja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090088.784889-950-33278348215884/AnsiballZ_command.py'
Oct 10 09:54:49 compute-0 sudo[147062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:49 compute-0 python3.9[147064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:49 compute-0 sudo[147062]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:49 compute-0 ceph-mon[73551]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 09:54:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:49 compute-0 sudo[147217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufwcgefwhgwdklfjfybvgmfxfmqapkjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090089.6196733-974-238155664058891/AnsiballZ_command.py'
Oct 10 09:54:49 compute-0 sudo[147217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:50 compute-0 python3.9[147219]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:50 compute-0 ovs-vsctl[147221]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 10 09:54:50 compute-0 sudo[147217]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 09:54:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:51 compute-0 python3.9[147372]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:54:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:51 compute-0 ceph-mon[73551]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 09:54:51 compute-0 sudo[147524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkngtpbcgaypezgvgwppwpsrcerqeghn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090091.3537698-1025-87037013126566/AnsiballZ_file.py'
Oct 10 09:54:51 compute-0 sudo[147524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:51.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:51 compute-0 python3.9[147526]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:51 compute-0 sudo[147524]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:51 compute-0 sudo[147527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:54:51 compute-0 sudo[147527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:51 compute-0 sudo[147527]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 09:54:52 compute-0 sudo[147702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpxzlbxvzdabntpzipxnggrmsoxxnre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090092.1817741-1049-130420490512413/AnsiballZ_stat.py'
Oct 10 09:54:52 compute-0 sudo[147702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:52 compute-0 python3.9[147704]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:52 compute-0 sudo[147702]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:52 compute-0 sudo[147781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoopyrhrhvbsybvjmquznigrxolznoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090092.1817741-1049-130420490512413/AnsiballZ_file.py'
Oct 10 09:54:52 compute-0 sudo[147781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:53 compute-0 python3.9[147783]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:53 compute-0 sudo[147781]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:54:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7052 writes, 1226 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 20.45 MB, 0.03 MB/s
                                           Interval WAL: 7052 writes, 1226 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 09:54:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:53.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:53 compute-0 ceph-mon[73551]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 09:54:53 compute-0 sudo[147933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmtyajvxuhnjwhxyrjbxefsqwuotmuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090093.3586226-1049-173072021379124/AnsiballZ_stat.py'
Oct 10 09:54:53 compute-0 sudo[147933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:53 compute-0 python3.9[147935]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:53 compute-0 sudo[147933]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:54 compute-0 sudo[148012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dohfwerklirrdofuzittkbzcvkqhhynp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090093.3586226-1049-173072021379124/AnsiballZ_file.py'
Oct 10 09:54:54 compute-0 sudo[148012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:54 compute-0 python3.9[148014]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:54 compute-0 sudo[148012]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:54 compute-0 sudo[148165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cybkwsmbekjmuvjarnhwyflvgafwbffk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090094.6739264-1118-120176327019205/AnsiballZ_file.py'
Oct 10 09:54:54 compute-0 sudo[148165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:55 compute-0 python3.9[148167]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:55 compute-0 sudo[148165]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:55 compute-0 ceph-mon[73551]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:55 compute-0 sudo[148317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przmkyruvfptmfnjvfzferklhkhnlbht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090095.4561749-1142-50959550121687/AnsiballZ_stat.py'
Oct 10 09:54:55 compute-0 sudo[148317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:55.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:55 compute-0 python3.9[148319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:55 compute-0 sudo[148317]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:56 compute-0 sudo[148396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xclphxmampbdedzyfknmhjtdigmkyeqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090095.4561749-1142-50959550121687/AnsiballZ_file.py'
Oct 10 09:54:56 compute-0 sudo[148396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:56 compute-0 python3.9[148398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:56 compute-0 sudo[148396]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:56 compute-0 sudo[148549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skognfgcnjnpuhwrjqbbsytalnwfrhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090096.6186333-1178-7107127447275/AnsiballZ_stat.py'
Oct 10 09:54:56 compute-0 sudo[148549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:57.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:54:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:57.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:54:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:54:57.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:54:57 compute-0 python3.9[148551]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:57 compute-0 sudo[148549]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:57] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:54:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:54:57] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:54:57 compute-0 sudo[148627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejvddszxainzpjnscmvzdxqumvxvtxum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090096.6186333-1178-7107127447275/AnsiballZ_file.py'
Oct 10 09:54:57 compute-0 sudo[148627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:57.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:57 compute-0 python3.9[148629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:57 compute-0 sudo[148627]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:57 compute-0 ceph-mon[73551]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:58 compute-0 sudo[148780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmxrnxdqgtqupbqztizbbmcnxcoiysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090097.9461203-1214-181346750834506/AnsiballZ_systemd.py'
Oct 10 09:54:58 compute-0 sudo[148780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:58 compute-0 python3.9[148782]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:54:58 compute-0 systemd[1]: Reloading.
Oct 10 09:54:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:58 compute-0 systemd-rc-local-generator[148810]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:54:58 compute-0 systemd-sysv-generator[148814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:54:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:59 compute-0 sudo[148780]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:54:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:59 compute-0 sudo[148971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfwxjmljxwrcxcorkaeesejkkfjhkis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090099.2445176-1238-276604967235795/AnsiballZ_stat.py'
Oct 10 09:54:59 compute-0 sudo[148971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:59.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:59 compute-0 ceph-mon[73551]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:54:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:54:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:54:59 compute-0 python3.9[148973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:59 compute-0 sudo[148971]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:00 compute-0 sudo[149050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmwekgjtzwvltezvcchujzgqivvojlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090099.2445176-1238-276604967235795/AnsiballZ_file.py'
Oct 10 09:55:00 compute-0 sudo[149050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:00 compute-0 python3.9[149052]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:00 compute-0 sudo[149050]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:00 compute-0 sudo[149203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvrahyyrphootglcdbuyinidsbtgqfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090100.6137834-1274-105052908206374/AnsiballZ_stat.py'
Oct 10 09:55:00 compute-0 sudo[149203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:01 compute-0 python3.9[149205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:01 compute-0 sudo[149203]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:55:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:01 compute-0 sudo[149281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sliuhiqafiebhdgbzmiaswuidbctehoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090100.6137834-1274-105052908206374/AnsiballZ_file.py'
Oct 10 09:55:01 compute-0 sudo[149281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:01 compute-0 python3.9[149283]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:01 compute-0 sudo[149281]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000037s ======
Oct 10 09:55:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Oct 10 09:55:01 compute-0 ceph-mon[73551]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:02 compute-0 sudo[149434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhmdwbqqgovjihxmnxvqaeuieiusbwxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090101.8551793-1310-192516421845876/AnsiballZ_systemd.py'
Oct 10 09:55:02 compute-0 sudo[149434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:02 compute-0 python3.9[149436]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:55:02 compute-0 systemd[1]: Reloading.
Oct 10 09:55:02 compute-0 systemd-rc-local-generator[149463]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:02 compute-0 systemd-sysv-generator[149467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:02 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 09:55:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:55:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:55:02 compute-0 systemd[1]: Finished Create netns directory.
Oct 10 09:55:02 compute-0 sudo[149434]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:55:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:55:03 compute-0 sudo[149628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlijacuuazgridkqqfxokboosxpwcopt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090103.28436-1340-82558914334783/AnsiballZ_file.py'
Oct 10 09:55:03 compute-0 sudo[149628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:03 compute-0 ceph-mon[73551]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:03 compute-0 python3.9[149630]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:03 compute-0 sudo[149628]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:04 compute-0 sudo[149781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmggwqbblkmffabmbgbmfqisvxnoqaql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090104.040832-1364-179396828045045/AnsiballZ_stat.py'
Oct 10 09:55:04 compute-0 sudo[149781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:04 compute-0 python3.9[149783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:04 compute-0 sudo[149781]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:05 compute-0 sudo[149905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsemyzcflmksljcesvhcqpufjeukqrwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090104.040832-1364-179396828045045/AnsiballZ_copy.py'
Oct 10 09:55:05 compute-0 sudo[149905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:05 compute-0 python3.9[149907]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090104.040832-1364-179396828045045/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:05 compute-0 sudo[149905]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:05.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:05 compute-0 ceph-mon[73551]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:06 compute-0 sudo[150058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziqhzkyckpismdcqbkisatiavacfhmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090105.8851626-1415-231239750729491/AnsiballZ_file.py'
Oct 10 09:55:06 compute-0 sudo[150058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:06 compute-0 python3.9[150060]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:06 compute-0 sudo[150058]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:07.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:55:07 compute-0 sudo[150211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehqirddmxrijsxfmclaacucdohahecf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090106.7238805-1439-116011177901075/AnsiballZ_stat.py'
Oct 10 09:55:07 compute-0 sudo[150211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:07 compute-0 python3.9[150213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:07 compute-0 sudo[150211]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:07] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 10 09:55:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:07] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 10 09:55:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:55:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:55:07 compute-0 sudo[150334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpsjwrurdlerddvnysdawzbqechxuvgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090106.7238805-1439-116011177901075/AnsiballZ_copy.py'
Oct 10 09:55:07 compute-0 sudo[150334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:07 compute-0 ceph-mon[73551]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:55:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:55:07 compute-0 python3.9[150336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090106.7238805-1439-116011177901075/.source.json _original_basename=.p2hjc5s6 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:07 compute-0 sudo[150334]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:08 compute-0 sudo[150487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyoaubosamzykxgzstjhrdejyywpfepm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090108.2377377-1484-277273996157765/AnsiballZ_file.py'
Oct 10 09:55:08 compute-0 sudo[150487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:08 compute-0 python3.9[150489]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:08 compute-0 sudo[150487]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:09 compute-0 sudo[150640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwxpmqtgudyshuvmegnbnmloqpjhxmkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090109.1225688-1508-86977530244566/AnsiballZ_stat.py'
Oct 10 09:55:09 compute-0 sudo[150640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000072s ======
Oct 10 09:55:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:09.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000072s
Oct 10 09:55:09 compute-0 sudo[150640]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:09 compute-0 ceph-mon[73551]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:10 compute-0 sudo[150763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkazzcktulhhhvxfbqeomluyozjjvpge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090109.1225688-1508-86977530244566/AnsiballZ_copy.py'
Oct 10 09:55:10 compute-0 sudo[150763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:10 compute-0 sudo[150763]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:11 compute-0 sudo[150919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sifudcpfizkwzfsrbufzxtqqfsjqznec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090110.7026196-1559-259907997068876/AnsiballZ_container_config_data.py'
Oct 10 09:55:11 compute-0 sudo[150919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:11 compute-0 python3.9[150921]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 10 09:55:11 compute-0 sudo[150919]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:55:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:11.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:55:11 compute-0 ceph-mon[73551]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:11.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:12 compute-0 sudo[151003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:55:12 compute-0 sudo[151003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:12 compute-0 sudo[151003]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:12 compute-0 sudo[151097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thhxqoieympbslztouryxoskqowgwlsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090111.6720624-1586-226577834646777/AnsiballZ_container_config_hash.py'
Oct 10 09:55:12 compute-0 sudo[151097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:12 compute-0 python3.9[151099]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 09:55:12 compute-0 sudo[151097]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:13 compute-0 sudo[151250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitksjxcwzdbajusfdicdcxneopkczry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090112.727513-1613-236926429932177/AnsiballZ_podman_container_info.py'
Oct 10 09:55:13 compute-0 sudo[151250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:13 compute-0 python3.9[151252]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 09:55:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:13 compute-0 sudo[151250]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:13 compute-0 ceph-mon[73551]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:15 compute-0 sudo[151431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mymwaeqgyrlpjyggmwmyxgdtkxkzngiz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090114.5027993-1652-152132153420633/AnsiballZ_edpm_container_manage.py'
Oct 10 09:55:15 compute-0 sudo[151431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:15 compute-0 python3[151433]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 09:55:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:15.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:15 compute-0 ceph-mon[73551]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:55:16
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', '.nfs', 'backups', 'vms']
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:55:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:55:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:55:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:55:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:17.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:55:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:17] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:55:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:17] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:55:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:55:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:17.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:55:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:17.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:17 compute-0 ceph-mon[73551]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:18 compute-0 ceph-mon[73551]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000036s ======
Oct 10 09:55:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:19.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Oct 10 09:55:20 compute-0 sudo[151532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:55:20 compute-0 sudo[151532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:20 compute-0 sudo[151532]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:20 compute-0 sudo[151572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:55:20 compute-0 sudo[151572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:20 compute-0 podman[151446]: 2025-10-10 09:55:20.41803875 +0000 UTC m=+5.015893835 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 09:55:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:55:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:55:20 compute-0 podman[151631]: 2025-10-10 09:55:20.573941794 +0000 UTC m=+0.052179576 container create be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 10 09:55:20 compute-0 podman[151631]: 2025-10-10 09:55:20.549688538 +0000 UTC m=+0.027926340 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 09:55:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:20 compute-0 python3[151433]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 09:55:20 compute-0 sudo[151431]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:20 compute-0 sudo[151572]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:55:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:55:21 compute-0 sudo[151714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:55:21 compute-0 sudo[151714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:21 compute-0 sudo[151714]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:21 compute-0 ceph-mon[73551]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:55:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:55:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:21 compute-0 sudo[151739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:55:21 compute-0 sudo[151739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.106908377 +0000 UTC m=+0.044987150 container create 24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermat, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:55:22 compute-0 systemd[1]: Started libpod-conmon-24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32.scope.
Oct 10 09:55:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.0874278 +0000 UTC m=+0.025506623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.194136346 +0000 UTC m=+0.132215129 container init 24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermat, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.20351011 +0000 UTC m=+0.141588883 container start 24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.207262465 +0000 UTC m=+0.145341288 container attach 24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 09:55:22 compute-0 nice_fermat[151823]: 167 167
Oct 10 09:55:22 compute-0 systemd[1]: libpod-24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32.scope: Deactivated successfully.
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.211154963 +0000 UTC m=+0.149233756 container died 24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermat, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dae6619b55b07f2490e6a4d5cbed8018e4b57d37947149e0181fdb2918e6b4a-merged.mount: Deactivated successfully.
Oct 10 09:55:22 compute-0 podman[151807]: 2025-10-10 09:55:22.253829579 +0000 UTC m=+0.191908362 container remove 24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_fermat, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:55:22 compute-0 systemd[1]: libpod-conmon-24869dbc85079e7c83886a9f6ccbe300bf2a0715e9992d5456d27d1907354f32.scope: Deactivated successfully.
Oct 10 09:55:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:22 compute-0 podman[151846]: 2025-10-10 09:55:22.47057846 +0000 UTC m=+0.080409656 container create 52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_yonath, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:55:22 compute-0 systemd[1]: Started libpod-conmon-52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf.scope.
Oct 10 09:55:22 compute-0 podman[151846]: 2025-10-10 09:55:22.442971313 +0000 UTC m=+0.052802519 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:55:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83dd0e553495313e64af3ea5665ea6ba7356b2c5e5f97d5c1203231ebcdbc50e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83dd0e553495313e64af3ea5665ea6ba7356b2c5e5f97d5c1203231ebcdbc50e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83dd0e553495313e64af3ea5665ea6ba7356b2c5e5f97d5c1203231ebcdbc50e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83dd0e553495313e64af3ea5665ea6ba7356b2c5e5f97d5c1203231ebcdbc50e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83dd0e553495313e64af3ea5665ea6ba7356b2c5e5f97d5c1203231ebcdbc50e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:22 compute-0 podman[151846]: 2025-10-10 09:55:22.571671024 +0000 UTC m=+0.181502230 container init 52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_yonath, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:55:22 compute-0 podman[151846]: 2025-10-10 09:55:22.588269628 +0000 UTC m=+0.198100824 container start 52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_yonath, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 09:55:22 compute-0 podman[151846]: 2025-10-10 09:55:22.593443643 +0000 UTC m=+0.203274849 container attach 52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_yonath, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:55:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:22 compute-0 sudo[152000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlbyrweftrsyljcsmmjcaryoxzewwclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090122.5830295-1676-231859709017874/AnsiballZ_stat.py'
Oct 10 09:55:22 compute-0 sudo[152000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:22 compute-0 elastic_yonath[151863]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:55:22 compute-0 elastic_yonath[151863]: --> All data devices are unavailable
Oct 10 09:55:22 compute-0 systemd[1]: libpod-52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf.scope: Deactivated successfully.
Oct 10 09:55:23 compute-0 podman[152007]: 2025-10-10 09:55:23.043980752 +0000 UTC m=+0.045073233 container died 52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 09:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-83dd0e553495313e64af3ea5665ea6ba7356b2c5e5f97d5c1203231ebcdbc50e-merged.mount: Deactivated successfully.
Oct 10 09:55:23 compute-0 podman[152007]: 2025-10-10 09:55:23.084242122 +0000 UTC m=+0.085334583 container remove 52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 09:55:23 compute-0 systemd[1]: libpod-conmon-52606d9764983f47222d83b97e1e89b9fba4bdc3501f58ada67b77b7b3ac0cdf.scope: Deactivated successfully.
Oct 10 09:55:23 compute-0 sudo[151739]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:23 compute-0 python3.9[152003]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:55:23 compute-0 sudo[152000]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:23 compute-0 sudo[152024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:55:23 compute-0 sudo[152024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:23 compute-0 sudo[152024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:23 compute-0 sudo[152072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:55:23 compute-0 sudo[152072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:23.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:23 compute-0 ceph-mon[73551]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:23 compute-0 sudo[152272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjktxdnharnyslwsoywdtxzbhdgbpejk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090123.439335-1703-104175004471467/AnsiballZ_file.py'
Oct 10 09:55:23 compute-0 sudo[152272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.726471444 +0000 UTC m=+0.032693950 container create ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 09:55:23 compute-0 systemd[1]: Started libpod-conmon-ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f.scope.
Oct 10 09:55:23 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:23.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.712287497 +0000 UTC m=+0.018510023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.8197526 +0000 UTC m=+0.125975106 container init ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.829749827 +0000 UTC m=+0.135972333 container start ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.833430369 +0000 UTC m=+0.139652895 container attach ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_grothendieck, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:55:23 compute-0 frosty_grothendieck[152285]: 167 167
Oct 10 09:55:23 compute-0 systemd[1]: libpod-ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f.scope: Deactivated successfully.
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.839189065 +0000 UTC m=+0.145411591 container died ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_grothendieck, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-498f36d473a24fbf8a886dc50c5cf527f14c4a86640db24661b9fd20aacc71e1-merged.mount: Deactivated successfully.
Oct 10 09:55:23 compute-0 podman[152260]: 2025-10-10 09:55:23.876073154 +0000 UTC m=+0.182295660 container remove ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_grothendieck, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:55:23 compute-0 systemd[1]: libpod-conmon-ee67f557cc37a4ac2c2de8acc5cbde2eb2e4aa1f0eb0c3e31f56560589a60d1f.scope: Deactivated successfully.
Oct 10 09:55:23 compute-0 python3.9[152281]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:23 compute-0 sudo[152272]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.022445306 +0000 UTC m=+0.042970136 container create a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 09:55:24 compute-0 systemd[1]: Started libpod-conmon-a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1.scope.
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.004659311 +0000 UTC m=+0.025184171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:55:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7243157aec7e5812a3abe7dc8cdd8f7a8921b6ef7e2eb3fb8703c845af09ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7243157aec7e5812a3abe7dc8cdd8f7a8921b6ef7e2eb3fb8703c845af09ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7243157aec7e5812a3abe7dc8cdd8f7a8921b6ef7e2eb3fb8703c845af09ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7243157aec7e5812a3abe7dc8cdd8f7a8921b6ef7e2eb3fb8703c845af09ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.116721138 +0000 UTC m=+0.137245998 container init a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.126480736 +0000 UTC m=+0.147005586 container start a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaum, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.129971281 +0000 UTC m=+0.150496121 container attach a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 09:55:24 compute-0 sudo[152404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epjsyautccxvuufzafrmigxaaqvniupb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090123.439335-1703-104175004471467/AnsiballZ_stat.py'
Oct 10 09:55:24 compute-0 sudo[152404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:24 compute-0 python3.9[152406]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:55:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:24 compute-0 sudo[152404]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:24 compute-0 youthful_chaum[152370]: {
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:     "0": [
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:         {
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "devices": [
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "/dev/loop3"
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             ],
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "lv_name": "ceph_lv0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "lv_size": "21470642176",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "name": "ceph_lv0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "tags": {
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.cluster_name": "ceph",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.crush_device_class": "",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.encrypted": "0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.osd_id": "0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.type": "block",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.vdo": "0",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:                 "ceph.with_tpm": "0"
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             },
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "type": "block",
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:             "vg_name": "ceph_vg0"
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:         }
Oct 10 09:55:24 compute-0 youthful_chaum[152370]:     ]
Oct 10 09:55:24 compute-0 youthful_chaum[152370]: }
Oct 10 09:55:24 compute-0 systemd[1]: libpod-a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1.scope: Deactivated successfully.
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.488995232 +0000 UTC m=+0.509520082 container died a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaum, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae7243157aec7e5812a3abe7dc8cdd8f7a8921b6ef7e2eb3fb8703c845af09ef-merged.mount: Deactivated successfully.
Oct 10 09:55:24 compute-0 podman[152313]: 2025-10-10 09:55:24.536899775 +0000 UTC m=+0.557424615 container remove a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaum, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:55:24 compute-0 systemd[1]: libpod-conmon-a8e45030fde9c116004e67c03a0ca9a92ee46882a6d223e9e7814a6851d3d8d1.scope: Deactivated successfully.
Oct 10 09:55:24 compute-0 sudo[152072]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:24 compute-0 sudo[152474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:55:24 compute-0 sudo[152474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:24 compute-0 sudo[152474]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:24 compute-0 sudo[152499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:55:24 compute-0 sudo[152499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:24 compute-0 sudo[152621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqirelwjwiytagflsygzigqqltsqhdga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090124.4815114-1703-64792756463001/AnsiballZ_copy.py'
Oct 10 09:55:24 compute-0 sudo[152621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:25 compute-0 python3.9[152623]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090124.4815114-1703-64792756463001/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:25 compute-0 sudo[152621]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.161207609 +0000 UTC m=+0.046136826 container create ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 09:55:25 compute-0 systemd[1]: Started libpod-conmon-ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795.scope.
Oct 10 09:55:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.140577197 +0000 UTC m=+0.025506404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.254938405 +0000 UTC m=+0.139867632 container init ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:55:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.262574267 +0000 UTC m=+0.147503454 container start ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.266267579 +0000 UTC m=+0.151196766 container attach ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:55:25 compute-0 systemd[1]: libpod-ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795.scope: Deactivated successfully.
Oct 10 09:55:25 compute-0 vigilant_goodall[152695]: 167 167
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.271194242 +0000 UTC m=+0.156123419 container died ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:55:25 compute-0 conmon[152695]: conmon ff325700c740f569c435 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795.scope/container/memory.events
Oct 10 09:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-64691031e2056ad8f07d04e6c70ade0b22148042695923f0d51ab07cea123d5e-merged.mount: Deactivated successfully.
Oct 10 09:55:25 compute-0 podman[152666]: 2025-10-10 09:55:25.311601337 +0000 UTC m=+0.196530514 container remove ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:55:25 compute-0 systemd[1]: libpod-conmon-ff325700c740f569c43516448858d5629adea091b8b6ffd7b8ad4d409949a795.scope: Deactivated successfully.
Oct 10 09:55:25 compute-0 sudo[152774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqixbrsdtbntjwvnanzxvitfexaduumd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090124.4815114-1703-64792756463001/AnsiballZ_systemd.py'
Oct 10 09:55:25 compute-0 sudo[152774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:25 compute-0 podman[152781]: 2025-10-10 09:55:25.56355645 +0000 UTC m=+0.114873096 container create 2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldwasser, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:55:25 compute-0 podman[152781]: 2025-10-10 09:55:25.486989 +0000 UTC m=+0.038305626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:55:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 09:55:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:25.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 09:55:25 compute-0 systemd[1]: Started libpod-conmon-2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c.scope.
Oct 10 09:55:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f19e8adf718728ea2499115ddfb118680091d7237eed020c353eeecd0b412a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f19e8adf718728ea2499115ddfb118680091d7237eed020c353eeecd0b412a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f19e8adf718728ea2499115ddfb118680091d7237eed020c353eeecd0b412a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f19e8adf718728ea2499115ddfb118680091d7237eed020c353eeecd0b412a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:25 compute-0 podman[152781]: 2025-10-10 09:55:25.668015621 +0000 UTC m=+0.219332317 container init 2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:55:25 compute-0 podman[152781]: 2025-10-10 09:55:25.687312138 +0000 UTC m=+0.238628754 container start 2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldwasser, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:55:25 compute-0 podman[152781]: 2025-10-10 09:55:25.691552308 +0000 UTC m=+0.242868944 container attach 2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:55:25 compute-0 ceph-mon[73551]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:25 compute-0 python3.9[152783]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:55:25 compute-0 systemd[1]: Reloading.
Oct 10 09:55:25 compute-0 systemd-rc-local-generator[152833]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:25 compute-0 systemd-sysv-generator[152840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:26 compute-0 sudo[152774]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:26 compute-0 lvm[152930]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:55:26 compute-0 lvm[152930]: VG ceph_vg0 finished
Oct 10 09:55:26 compute-0 flamboyant_goldwasser[152798]: {}
Oct 10 09:55:26 compute-0 systemd[1]: libpod-2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c.scope: Deactivated successfully.
Oct 10 09:55:26 compute-0 systemd[1]: libpod-2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c.scope: Consumed 1.211s CPU time.
Oct 10 09:55:26 compute-0 podman[152781]: 2025-10-10 09:55:26.50740924 +0000 UTC m=+1.058725846 container died 2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f19e8adf718728ea2499115ddfb118680091d7237eed020c353eeecd0b412a-merged.mount: Deactivated successfully.
Oct 10 09:55:26 compute-0 sudo[152985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kukisdombjbhwwguyslbenbozrqlzpxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090124.4815114-1703-64792756463001/AnsiballZ_systemd.py'
Oct 10 09:55:26 compute-0 sudo[152985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:26 compute-0 podman[152781]: 2025-10-10 09:55:26.54770155 +0000 UTC m=+1.099018156 container remove 2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:55:26 compute-0 systemd[1]: libpod-conmon-2fb733c0ea78767c39c2568e3acf0da72ff3b9844389a105458563f038be175c.scope: Deactivated successfully.
Oct 10 09:55:26 compute-0 sudo[152499]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:55:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:55:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:26 compute-0 sudo[152997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:55:26 compute-0 sudo[152997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:26 compute-0 sudo[152997]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:26 compute-0 python3.9[152996]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:55:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:26 compute-0 systemd[1]: Reloading.
Oct 10 09:55:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:27.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:55:27 compute-0 systemd-rc-local-generator[153053]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:27 compute-0 systemd-sysv-generator[153056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:27 compute-0 systemd[1]: Starting ovn_controller container...
Oct 10 09:55:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bd075b15c2ab27d128f0ac8cadc26474309f1731d8eaacc63b389ccd38a9ab/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:27] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:55:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:27] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 10 09:55:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb.
Oct 10 09:55:27 compute-0 podman[153064]: 2025-10-10 09:55:27.408338753 +0000 UTC m=+0.130608601 container init be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + sudo -E kolla_set_configs
Oct 10 09:55:27 compute-0 podman[153064]: 2025-10-10 09:55:27.439923208 +0000 UTC m=+0.162193036 container start be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 10 09:55:27 compute-0 edpm-start-podman-container[153064]: ovn_controller
Oct 10 09:55:27 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 10 09:55:27 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 09:55:27 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 09:55:27 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 10 09:55:27 compute-0 edpm-start-podman-container[153063]: Creating additional drop-in dependency for "ovn_controller" (be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb)
Oct 10 09:55:27 compute-0 systemd[153120]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 10 09:55:27 compute-0 podman[153087]: 2025-10-10 09:55:27.525721017 +0000 UTC m=+0.074404423 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 09:55:27 compute-0 systemd[1]: be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb-514d1092b88d88ae.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 09:55:27 compute-0 systemd[1]: be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb-514d1092b88d88ae.service: Failed with result 'exit-code'.
Oct 10 09:55:27 compute-0 systemd[1]: Reloading.
Oct 10 09:55:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:55:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:27.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:55:27 compute-0 ceph-mon[73551]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:27 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:27 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:27 compute-0 systemd-rc-local-generator[153163]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:27 compute-0 systemd-sysv-generator[153167]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:27 compute-0 systemd[153120]: Queued start job for default target Main User Target.
Oct 10 09:55:27 compute-0 systemd[153120]: Created slice User Application Slice.
Oct 10 09:55:27 compute-0 systemd[153120]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 09:55:27 compute-0 systemd[153120]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:55:27 compute-0 systemd[153120]: Reached target Paths.
Oct 10 09:55:27 compute-0 systemd[153120]: Reached target Timers.
Oct 10 09:55:27 compute-0 systemd[153120]: Starting D-Bus User Message Bus Socket...
Oct 10 09:55:27 compute-0 systemd[153120]: Starting Create User's Volatile Files and Directories...
Oct 10 09:55:27 compute-0 systemd[153120]: Finished Create User's Volatile Files and Directories.
Oct 10 09:55:27 compute-0 systemd[153120]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:55:27 compute-0 systemd[153120]: Reached target Sockets.
Oct 10 09:55:27 compute-0 systemd[153120]: Reached target Basic System.
Oct 10 09:55:27 compute-0 systemd[153120]: Reached target Main User Target.
Oct 10 09:55:27 compute-0 systemd[153120]: Startup finished in 164ms.
Oct 10 09:55:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:55:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:27.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:55:27 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 10 09:55:27 compute-0 systemd[1]: Started ovn_controller container.
Oct 10 09:55:27 compute-0 systemd[1]: Started Session c1 of User root.
Oct 10 09:55:27 compute-0 sudo[152985]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:27 compute-0 ovn_controller[153080]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 09:55:27 compute-0 ovn_controller[153080]: INFO:__main__:Validating config file
Oct 10 09:55:27 compute-0 ovn_controller[153080]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 09:55:27 compute-0 ovn_controller[153080]: INFO:__main__:Writing out command to execute
Oct 10 09:55:27 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 10 09:55:27 compute-0 ovn_controller[153080]: ++ cat /run_command
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + ARGS=
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + sudo kolla_copy_cacerts
Oct 10 09:55:27 compute-0 systemd[1]: Started Session c2 of User root.
Oct 10 09:55:27 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + [[ ! -n '' ]]
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + . kolla_extend_start
Oct 10 09:55:27 compute-0 ovn_controller[153080]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + umask 0022
Oct 10 09:55:27 compute-0 ovn_controller[153080]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0160] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0170] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0181] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0188] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0192] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 09:55:28 compute-0 kernel: br-int: entered promiscuous mode
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 09:55:28 compute-0 systemd-udevd[152956]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-0 ovn_controller[153080]: 2025-10-10T09:55:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0363] manager: (ovn-38ab03-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0374] manager: (ovn-49146e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 10 09:55:28 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0532] device (genev_sys_6081): carrier: link connected
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.0536] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct 10 09:55:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:28 compute-0 NetworkManager[44849]: <info>  [1760090128.5649] manager: (ovn-ee0899-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 10 09:55:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:28 compute-0 sudo[153344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdbnuauwbefwgfadccmnykubiwwokdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090128.5421402-1787-62746310249237/AnsiballZ_command.py'
Oct 10 09:55:28 compute-0 sudo[153344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:29 compute-0 python3.9[153346]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:55:29 compute-0 ovs-vsctl[153347]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 10 09:55:29 compute-0 sudo[153344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:29.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:29 compute-0 ceph-mon[73551]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:29 compute-0 sudo[153497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqjzjwaedzonjsdbhomiluxephkvptm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090129.400927-1811-94818586043508/AnsiballZ_command.py'
Oct 10 09:55:29 compute-0 sudo[153497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:29 compute-0 python3.9[153499]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:55:29 compute-0 ovs-vsctl[153501]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 10 09:55:30 compute-0 sudo[153497]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:30 compute-0 sudo[153654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmyetjklfiwdcdlncjgyfsrxwmnqqerb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090130.4954479-1853-138950346974375/AnsiballZ_command.py'
Oct 10 09:55:30 compute-0 sudo[153654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:30 compute-0 python3.9[153656]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:55:31 compute-0 ovs-vsctl[153657]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 10 09:55:31 compute-0 sudo[153654]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:55:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:31 compute-0 sshd-session[140864]: Connection closed by 192.168.122.30 port 59414
Oct 10 09:55:31 compute-0 sshd-session[140861]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:55:31 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 10 09:55:31 compute-0 systemd[1]: session-51.scope: Consumed 1min 1.622s CPU time.
Oct 10 09:55:31 compute-0 systemd-logind[806]: Session 51 logged out. Waiting for processes to exit.
Oct 10 09:55:31 compute-0 systemd-logind[806]: Removed session 51.
Oct 10 09:55:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:31.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:31 compute-0 ceph-mon[73551]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:32 compute-0 sudo[153688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:55:32 compute-0 sudo[153688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:32 compute-0 sudo[153688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:33 compute-0 ceph-mon[73551]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:35 compute-0 ceph-mon[73551]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:35.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:36 compute-0 sshd-session[153717]: Accepted publickey for zuul from 192.168.122.30 port 34002 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:55:36 compute-0 systemd-logind[806]: New session 53 of user zuul.
Oct 10 09:55:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:36 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 10 09:55:36 compute-0 sshd-session[153717]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:55:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:37.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:55:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:37] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:55:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:37] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:55:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:37 compute-0 python3.9[153871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:55:37 compute-0 ceph-mon[73551]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:38 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 10 09:55:38 compute-0 systemd[153120]: Activating special unit Exit the Session...
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped target Main User Target.
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped target Basic System.
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped target Paths.
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped target Sockets.
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped target Timers.
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 09:55:38 compute-0 systemd[153120]: Closed D-Bus User Message Bus Socket.
Oct 10 09:55:38 compute-0 systemd[153120]: Stopped Create User's Volatile Files and Directories.
Oct 10 09:55:38 compute-0 systemd[153120]: Removed slice User Application Slice.
Oct 10 09:55:38 compute-0 systemd[153120]: Reached target Shutdown.
Oct 10 09:55:38 compute-0 systemd[153120]: Finished Exit the Session.
Oct 10 09:55:38 compute-0 systemd[153120]: Reached target Exit the Session.
Oct 10 09:55:38 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 09:55:38 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 10 09:55:38 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 09:55:38 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 09:55:38 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 09:55:38 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 09:55:38 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 09:55:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:38 compute-0 sudo[154028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opiacledmzorymnelqhwlvoclotyjcse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090138.446714-62-210935676406345/AnsiballZ_file.py'
Oct 10 09:55:38 compute-0 sudo[154028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:39 compute-0 python3.9[154030]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:39 compute-0 sudo[154028]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:39 compute-0 sudo[154180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krghzvlirhjpiifysyqyitqcxmwtupyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090139.3107162-62-76748350533534/AnsiballZ_file.py'
Oct 10 09:55:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:39 compute-0 sudo[154180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:39 compute-0 ceph-mon[73551]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:39 compute-0 python3.9[154182]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:39 compute-0 sudo[154180]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:40 compute-0 sudo[154333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzohddvowcuutivqieslmymmrlspzacl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090140.0432007-62-171332014771949/AnsiballZ_file.py'
Oct 10 09:55:40 compute-0 sudo[154333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:40 compute-0 python3.9[154335]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:40 compute-0 sudo[154333]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:40 compute-0 sudo[154486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fetgkmhrmjljhbemwmugcsiuxhnzcmpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090140.7198524-62-85057281089430/AnsiballZ_file.py'
Oct 10 09:55:40 compute-0 sudo[154486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:41 compute-0 python3.9[154488]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:41 compute-0 sudo[154486]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:41 compute-0 sudo[154638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvucsmhttpqivbkqysppdeavwovtcas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090141.4100618-62-244347681628434/AnsiballZ_file.py'
Oct 10 09:55:41 compute-0 sudo[154638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:41.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:41 compute-0 ceph-mon[73551]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:41 compute-0 python3.9[154640]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:41 compute-0 sudo[154638]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:43 compute-0 python3.9[154793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:55:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:43 compute-0 ceph-mon[73551]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:44 compute-0 sudo[154944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyfewaabgkesvdrrowfaycgspwdezti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090143.731335-194-239378260290266/AnsiballZ_seboolean.py'
Oct 10 09:55:44 compute-0 sudo[154944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:44 compute-0 python3.9[154946]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 09:55:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:45 compute-0 sudo[154944]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:45.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:55:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:55:46 compute-0 ceph-mon[73551]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:46 compute-0 python3.9[155097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:55:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:55:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004360 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:47.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:55:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:47 compute-0 ceph-mon[73551]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:47 compute-0 python3.9[155221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090145.4212496-218-239310478011350/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:47] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:55:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:47] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:55:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:47 compute-0 python3.9[155371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:47.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:48 compute-0 python3.9[155493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090147.2911203-263-127033932593383/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:49 compute-0 sudo[155644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seuslhcmhvmubnbrbpkykhdnkrbepwgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090149.1050093-314-171610057321584/AnsiballZ_setup.py'
Oct 10 09:55:49 compute-0 sudo[155644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:49 compute-0 ceph-mon[73551]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:49.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:49 compute-0 python3.9[155646]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:55:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:49.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:49 compute-0 sudo[155644]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:50 compute-0 sudo[155729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrwejdnbsabhszpbxerxvjjluuyvijl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090149.1050093-314-171610057321584/AnsiballZ_dnf.py'
Oct 10 09:55:50 compute-0 sudo[155729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:50 compute-0 python3.9[155731]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:55:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:51 compute-0 ceph-mon[73551]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:51.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:52 compute-0 sudo[155729]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:52 compute-0 sudo[155795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:55:52 compute-0 sudo[155795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:52 compute-0 sudo[155795]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:52 compute-0 sudo[155911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkjehtksiptsmiovndbhrhsluorjsqxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090152.2039654-350-188485980683071/AnsiballZ_systemd.py'
Oct 10 09:55:52 compute-0 sudo[155911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:53 compute-0 python3.9[155913]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:55:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:53 compute-0 sudo[155911]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:53 compute-0 ceph-mon[73551]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:53.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:53 compute-0 python3.9[156066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:54 compute-0 python3.9[156188]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090153.499046-374-223936600202258/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:55 compute-0 python3.9[156339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:55 compute-0 ceph-mon[73551]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:55:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:55:55 compute-0 python3.9[156460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090154.7752206-374-261876081570699/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:57.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:55:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:55:57.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:55:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:57 compute-0 python3.9[156612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:57] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:55:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:55:57] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:55:57 compute-0 ceph-mon[73551]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:57 compute-0 ovn_controller[153080]: 2025-10-10T09:55:57Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Oct 10 09:55:57 compute-0 ovn_controller[153080]: 2025-10-10T09:55:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 10 09:55:57 compute-0 podman[156707]: 2025-10-10 09:55:57.741422455 +0000 UTC m=+0.117713916 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 09:55:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:57 compute-0 python3.9[156746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090156.8524985-506-37949175709230/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:55:58 compute-0 python3.9[156910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:59 compute-0 python3.9[157032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090157.991433-506-140906533289558/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:55:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:59 compute-0 ceph-mon[73551]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:55:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:59.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:55:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:55:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:59.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:00 compute-0 python3.9[157182]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:56:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:00 compute-0 sudo[157336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkwqlltqrdyqxuuikgdjufkcsdrpsvie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090160.5734782-620-208275598616907/AnsiballZ_file.py'
Oct 10 09:56:00 compute-0 sudo[157336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:01 compute-0 python3.9[157338]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:01 compute-0 sudo[157336]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:56:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:01.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:01 compute-0 ceph-mon[73551]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:01 compute-0 sudo[157488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sahjqmejttdhdldbcsvwcskimgmoiybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090161.4074004-644-8575346781918/AnsiballZ_stat.py'
Oct 10 09:56:01 compute-0 sudo[157488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095601 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
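haproxy marks backend/nfs.cephfs.0 DOWN here on a refused TCP connection; the same backend comes back UP at 09:56:21 further down, once ganesha is accepting connections again. A minimal sketch of an equivalent layer-4 check, with placeholder host and port since the log does not record the backend's address:

```python
# Sketch of the "Layer4" check haproxy performs here: a plain TCP connect
# with a short timeout. Host and port below are placeholders, not values
# taken from the log.
import socket

def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True            # haproxy would log "Layer4 check passed"
    except OSError:
        return False               # e.g. ECONNREFUSED -> "Connection refused"

print(l4_check("192.0.2.10", 2049))  # placeholder backend; 2049 is the usual NFS port
```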
Oct 10 09:56:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:02 compute-0 python3.9[157490]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:02 compute-0 sudo[157488]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:02 compute-0 sudo[157567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtqjgdrtlwgplpohspbjlrogjxacbdhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090161.4074004-644-8575346781918/AnsiballZ_file.py'
Oct 10 09:56:02 compute-0 sudo[157567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:02 compute-0 python3.9[157569]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:02 compute-0 sudo[157567]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:03 compute-0 sudo[157720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oinhvodubibhsdcfofluictgztblcnhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090162.7345114-644-221737853133678/AnsiballZ_stat.py'
Oct 10 09:56:03 compute-0 sudo[157720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:03 compute-0 python3.9[157722]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:03 compute-0 sudo[157720]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:03 compute-0 sudo[157798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkfmicwpdnstlynmmnvyltchhthadqdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090162.7345114-644-221737853133678/AnsiballZ_file.py'
Oct 10 09:56:03 compute-0 sudo[157798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:03.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:03 compute-0 ceph-mon[73551]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:03 compute-0 python3.9[157800]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:03.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:03 compute-0 sudo[157798]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:04 compute-0 sudo[157951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfzdeexgwjmzzxikqtaajsehgsjtmdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090164.0506568-713-223592171804874/AnsiballZ_file.py'
Oct 10 09:56:04 compute-0 sudo[157951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:04 compute-0 python3.9[157953]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:04 compute-0 sudo[157951]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:05 compute-0 sudo[158104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okldxmyyqutxlznkomnflmhioernzvaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090164.8730726-737-87414404686004/AnsiballZ_stat.py'
Oct 10 09:56:05 compute-0 sudo[158104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:05 compute-0 python3.9[158106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:05 compute-0 sudo[158104]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:05.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:05 compute-0 sudo[158182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfhwsuycejvocqtenbdqlpkulisnaqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090164.8730726-737-87414404686004/AnsiballZ_file.py'
Oct 10 09:56:05 compute-0 sudo[158182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:05 compute-0 ceph-mon[73551]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:05.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:05 compute-0 python3.9[158184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:05 compute-0 sudo[158182]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:06 compute-0 sudo[158335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvibbslddjrhxpebpxuoaitbebpkbqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090166.1839828-773-116841341314363/AnsiballZ_stat.py'
Oct 10 09:56:06 compute-0 sudo[158335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:06 compute-0 python3.9[158337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:06 compute-0 sudo[158335]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:06 compute-0 sudo[158414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmcfkpxxxsdqexbzwjguxxvqmtaiskm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090166.1839828-773-116841341314363/AnsiballZ_file.py'
Oct 10 09:56:06 compute-0 sudo[158414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:07.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:56:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:07.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:56:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:07.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
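The alertmanager lines show the dispatcher's retry behavior: per-attempt timeouts logged as "will retry later", then a final "notify retry canceled after 3 attempts" once the overall deadline passes. A sketch of that pattern, with the URL taken from the log and all timing constants purely illustrative:

```python
# Sketch of the retry-until-deadline pattern visible above; not alertmanager's
# actual code path. Timing constants are illustrative assumptions.
import time, urllib.request

def notify(url: str, payload: bytes, attempts: int = 3, per_try: float = 10.0) -> bool:
    for n in range(1, attempts + 1):
        try:
            req = urllib.request.Request(url, data=payload,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=per_try)
            return True
        except OSError:            # covers i/o timeout and connection errors
            if n == attempts:
                return False       # -> "Notify for alerts failed"
            time.sleep(2 ** n)     # backoff before "will retry later"
    return False

notify("http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver", b"{}")
```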
Oct 10 09:56:07 compute-0 python3.9[158416]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:07 compute-0 sudo[158414]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:07] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:56:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:07] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 10 09:56:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:07 compute-0 ceph-mon[73551]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:07 compute-0 sudo[158566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhmxfsihefansqlgxzusrossikfxefib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090167.5261428-809-248065035323390/AnsiballZ_systemd.py'
Oct 10 09:56:07 compute-0 sudo[158566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:07.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:08 compute-0 python3.9[158568]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:08 compute-0 systemd[1]: Reloading.
Oct 10 09:56:08 compute-0 systemd-rc-local-generator[158598]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:08 compute-0 systemd-sysv-generator[158602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
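The single ansible.builtin.systemd invocation above (daemon_reload=True enabled=True state=started) is what triggers the "Reloading." and generator lines that follow it. A shell-level rendering of the equivalent sequence, as an illustration rather than the module's actual code path:

```python
# Sketch: the systemctl sequence implied by the ansible.builtin.systemd
# invocation above. Equivalent shell-level steps, not the module internals.
import subprocess

unit = "edpm-container-shutdown"
for cmd in (["systemctl", "daemon-reload"],   # daemon_reload=True
            ["systemctl", "enable", unit],    # enabled=True
            ["systemctl", "start", unit]):    # state=started
    subprocess.run(cmd, check=True)
```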
Oct 10 09:56:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:08 compute-0 sudo[158566]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:09 compute-0 sudo[158757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjufxcvlpsxcjnilozezendhaltqhjta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090168.836904-833-129139787735203/AnsiballZ_stat.py'
Oct 10 09:56:09 compute-0 sudo[158757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:09 compute-0 python3.9[158759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:09 compute-0 sudo[158757]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:09 compute-0 sudo[158835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agvxxudfycfmtkswztnadflqckogzurb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090168.836904-833-129139787735203/AnsiballZ_file.py'
Oct 10 09:56:09 compute-0 sudo[158835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:09.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:09 compute-0 ceph-mon[73551]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:09 compute-0 python3.9[158837]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:09 compute-0 sudo[158835]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:10 compute-0 sudo[158988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijmidewnjhilyxgrurgfwzgugbvqoawn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090170.1811407-869-133233008280513/AnsiballZ_stat.py'
Oct 10 09:56:10 compute-0 sudo[158988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:56:10 compute-0 python3.9[158990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:10 compute-0 sudo[158988]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:10 compute-0 sudo[159067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoriimieszpxsorhgtfbuejsnrqotqmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090170.1811407-869-133233008280513/AnsiballZ_file.py'
Oct 10 09:56:10 compute-0 sudo[159067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:11 compute-0 python3.9[159069]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:11 compute-0 sudo[159067]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:11 compute-0 ceph-mon[73551]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:11 compute-0 sudo[159219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvasxuinjdrxhbowwykxrmcapdyzqes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090171.5294375-905-117017750400933/AnsiballZ_systemd.py'
Oct 10 09:56:11 compute-0 sudo[159219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:12 compute-0 python3.9[159221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:12 compute-0 systemd[1]: Reloading.
Oct 10 09:56:12 compute-0 systemd-rc-local-generator[159250]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:12 compute-0 systemd-sysv-generator[159254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:12 compute-0 sudo[159258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:56:12 compute-0 sudo[159258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:12 compute-0 sudo[159258]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:12 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 09:56:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:56:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:56:12 compute-0 systemd[1]: Finished Create netns directory.
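netns-placeholder.service starts, runs, and is immediately "Deactivated successfully", the signature of a oneshot unit. The journal shows only its description ("Create netns directory"); the body below is an assumption about what such a placeholder typically does, namely creating and deleting a throwaway namespace so that the /run/netns mount exists for later containers:

```python
# Sketch only: the unit's real commands are not in the log. Creating and
# deleting a throwaway namespace (which initializes /run/netns) is one
# plausible body for a "Create netns directory" oneshot; requires root.
import subprocess

subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)
```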
Oct 10 09:56:12 compute-0 sudo[159219]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:13 compute-0 sudo[159439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taewdjojtavdlfsyfihlslugyhgmvrqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090173.03761-935-9078121977083/AnsiballZ_file.py'
Oct 10 09:56:13 compute-0 sudo[159439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:56:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:56:13 compute-0 python3.9[159441]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:13 compute-0 sudo[159439]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:13.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:13 compute-0 ceph-mon[73551]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:13.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:14 compute-0 sudo[159592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcblhilmtbmcuolprdsefknrcbetmruw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090173.850362-959-213538565271263/AnsiballZ_stat.py'
Oct 10 09:56:14 compute-0 sudo[159592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:14 compute-0 python3.9[159594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:14 compute-0 sudo[159592]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:14 compute-0 sudo[159715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bityqoyorqweboomvviqyntgfbpfgrzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090173.850362-959-213538565271263/AnsiballZ_copy.py'
Oct 10 09:56:14 compute-0 sudo[159715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:14 compute-0 python3.9[159717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090173.850362-959-213538565271263/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:14 compute-0 sudo[159715]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:15.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:15 compute-0 ceph-mon[73551]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:15.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:15 compute-0 sudo[159869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnyfxesgoyrvgpbhubwbmfxrkromxqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090175.5668275-1010-93306135299110/AnsiballZ_file.py'
Oct 10 09:56:15 compute-0 sudo[159869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:16 compute-0 python3.9[159871]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:16 compute-0 sudo[159869]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:56:16
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images', 'vms', 'volumes']
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:56:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:56:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
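The pg target on each autoscaler line is reproducible from the numbers on that line: target = capacity ratio x bias x K, with the same K = 300 for every pool above (plausibly mon_target_pg_per_osd=100 times the 3 OSDs backing this 60 GiB cluster; the factor is inferred from the data, not stated in the log). A quick check against the 'cephfs.cephfs.meta' line:

```python
# Check of the pg_autoscaler arithmetic above. K = 300 is inferred from the
# logged numbers, not read from configuration.
ratio, bias = 5.087256625643029e-07, 4.0      # pool 'cephfs.cephfs.meta'
target = ratio * bias * 300
print(target)   # 0.0006104707950771635, exactly as logged
# The quantized value (16 here, vs current 32) also reflects power-of-two
# rounding and pool minimums that the log does not show.
```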
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:56:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:56:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
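This closes the NFS grace window opened at 09:56:10: ganesha entered grace with a 90 s allowance, reloaded client info from the backend at 09:56:13, found no clients to wait for (clid count(0)), and lifted grace early. The actual span from the logged timestamps:

```python
# Span of the grace window, computed from the two timestamps in the log.
from datetime import datetime

start = datetime.strptime("10/10/2025 09:56:10", "%d/%m/%Y %H:%M:%S")  # IN GRACE
end   = datetime.strptime("10/10/2025 09:56:16", "%d/%m/%Y %H:%M:%S")  # NOT IN GRACE
print((end - start).total_seconds())   # 6.0 -- far short of the 90 s maximum
```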
Oct 10 09:56:16 compute-0 sudo[160022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogpjbfoetnhkhtgajywlcygaxlfpivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090176.4057107-1034-56872195192681/AnsiballZ_stat.py'
Oct 10 09:56:16 compute-0 sudo[160022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:16 compute-0 python3.9[160025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:16 compute-0 sudo[160022]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:17.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:56:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:56:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:17.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:56:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:17] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:17] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:17 compute-0 sudo[160146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwqsqioymvfarposvbyuiyzvfzzwnfze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090176.4057107-1034-56872195192681/AnsiballZ_copy.py'
Oct 10 09:56:17 compute-0 sudo[160146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:17 compute-0 python3.9[160148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090176.4057107-1034-56872195192681/.source.json _original_basename=.hq6s_9ub follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:17 compute-0 sudo[160146]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:17.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:17 compute-0 ceph-mon[73551]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:17.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:18 compute-0 sudo[160299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onzyeiejorhqdahtxtpwqxfqcsgbssqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090177.891416-1079-279936740012517/AnsiballZ_file.py'
Oct 10 09:56:18 compute-0 sudo[160299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:18 compute-0 python3.9[160301]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:18 compute-0 sudo[160299]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:19 compute-0 sudo[160452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqjabcdhtfefjmkgawmylwkmkbhxsepo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090178.8067505-1103-129134490640208/AnsiballZ_stat.py'
Oct 10 09:56:19 compute-0 sudo[160452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:19 compute-0 sudo[160452]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:19 compute-0 sudo[160575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjrttuiezbjtdaufcrccmfpmtxcfmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090178.8067505-1103-129134490640208/AnsiballZ_copy.py'
Oct 10 09:56:19 compute-0 sudo[160575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:19 compute-0 ceph-mon[73551]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:19 compute-0 sudo[160575]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:20 compute-0 sudo[160729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byaelazycknbrzpwbmrhqgtdnoiffxrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090180.3715587-1154-123318569020552/AnsiballZ_container_config_data.py'
Oct 10 09:56:20 compute-0 sudo[160729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:21 compute-0 python3.9[160731]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 10 09:56:21 compute-0 sudo[160729]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:21.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095621 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 24ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:56:21 compute-0 sudo[160881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlxsltnqgomkvefnhoivothcjhqcpoor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090181.3725755-1181-137366271871756/AnsiballZ_container_config_hash.py'
Oct 10 09:56:21 compute-0 sudo[160881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:21.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:21 compute-0 ceph-mon[73551]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:22 compute-0 python3.9[160883]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 09:56:22 compute-0 sudo[160881]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:22 compute-0 sudo[161036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aunfjtwhbdzqqobdlmeuvqcgvnjzybga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090182.479308-1208-184836616145682/AnsiballZ_podman_container_info.py'
Oct 10 09:56:22 compute-0 sudo[161036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:23 compute-0 python3.9[161038]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 09:56:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:23 compute-0 sudo[161036]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:23.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:23.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:23 compute-0 ceph-mon[73551]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:56:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:24 compute-0 sudo[161217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvghamwvdrnanzdyybsacmjalvpoxxlg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090184.2634125-1247-105316022674095/AnsiballZ_edpm_container_manage.py'
Oct 10 09:56:24 compute-0 sudo[161217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:25 compute-0 python3[161219]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 09:56:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:25.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:25.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:25 compute-0 ceph-mon[73551]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:56:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:56:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:27.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:56:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:27.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:56:27 compute-0 sudo[161286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:56:27 compute-0 sudo[161286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:27 compute-0 sudo[161286]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:27 compute-0 sudo[161311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:56:27 compute-0 sudo[161311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:27] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:27] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:27.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:28 compute-0 ceph-mon[73551]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:56:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:56:28 compute-0 podman[161350]: 2025-10-10 09:56:28.60578437 +0000 UTC m=+0.442071575 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:56:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:29 compute-0 ceph-mon[73551]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:56:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:29.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:56:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:56:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:31 compute-0 ceph-mon[73551]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:56:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:31.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:32 compute-0 sudo[161311]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:56:32 compute-0 sudo[161431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:56:32 compute-0 sudo[161431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:32 compute-0 sudo[161431]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:56:32 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:56:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:56:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:56:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:56:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:56:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:56:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:56:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:56:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:56:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:56:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:56:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:33 compute-0 sudo[161474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:56:33 compute-0 sudo[161474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:33 compute-0 sudo[161474]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:33 compute-0 sudo[161499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:56:33 compute-0 sudo[161499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:33 compute-0 podman[161234]: 2025-10-10 09:56:33.488037857 +0000 UTC m=+8.330468433 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 09:56:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:33 compute-0 podman[161546]: 2025-10-10 09:56:33.672292281 +0000 UTC m=+0.060605820 container create e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:56:33 compute-0 podman[161546]: 2025-10-10 09:56:33.644507058 +0000 UTC m=+0.032820627 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 09:56:33 compute-0 python3[161219]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 09:56:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:33.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:33 compute-0 sudo[161217]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:33 compute-0 podman[161627]: 2025-10-10 09:56:33.912189435 +0000 UTC m=+0.052000223 container create 7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pasteur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:56:33 compute-0 systemd[1]: Started libpod-conmon-7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b.scope.
Oct 10 09:56:33 compute-0 podman[161627]: 2025-10-10 09:56:33.890474797 +0000 UTC m=+0.030285615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:33 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:34 compute-0 podman[161627]: 2025-10-10 09:56:34.019994502 +0000 UTC m=+0.159805300 container init 7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pasteur, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:56:34 compute-0 podman[161627]: 2025-10-10 09:56:34.02928851 +0000 UTC m=+0.169099298 container start 7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pasteur, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:56:34 compute-0 ceph-mon[73551]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:56:34 compute-0 podman[161627]: 2025-10-10 09:56:34.039359894 +0000 UTC m=+0.179170672 container attach 7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:56:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:56:34 compute-0 frosty_pasteur[161667]: 167 167
Oct 10 09:56:34 compute-0 systemd[1]: libpod-7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b.scope: Deactivated successfully.
Oct 10 09:56:34 compute-0 podman[161627]: 2025-10-10 09:56:34.04702062 +0000 UTC m=+0.186831398 container died 7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af08ed3d472095f4e13553289f00470c003db3f9a30fa3d0851f1c4e714bf8c-merged.mount: Deactivated successfully.
Oct 10 09:56:34 compute-0 podman[161627]: 2025-10-10 09:56:34.095481959 +0000 UTC m=+0.235292737 container remove 7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pasteur, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:56:34 compute-0 systemd[1]: libpod-conmon-7927fae744bd66013be66700931c739fe0fe53c226c2e5cf0f1aa52cb3c8d49b.scope: Deactivated successfully.
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.291598885 +0000 UTC m=+0.058011317 container create 97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lalande, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 09:56:34 compute-0 systemd[1]: Started libpod-conmon-97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1.scope.
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.274914958 +0000 UTC m=+0.041327410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:34 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424a5be4e3629bbd5f43366f779603373620c6b52cadadc1ceaa47e1b127494f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424a5be4e3629bbd5f43366f779603373620c6b52cadadc1ceaa47e1b127494f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424a5be4e3629bbd5f43366f779603373620c6b52cadadc1ceaa47e1b127494f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424a5be4e3629bbd5f43366f779603373620c6b52cadadc1ceaa47e1b127494f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424a5be4e3629bbd5f43366f779603373620c6b52cadadc1ceaa47e1b127494f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.387526629 +0000 UTC m=+0.153939081 container init 97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.39751003 +0000 UTC m=+0.163922472 container start 97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.402093767 +0000 UTC m=+0.168506229 container attach 97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lalande, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:56:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:34 compute-0 nervous_lalande[161709]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:56:34 compute-0 nervous_lalande[161709]: --> All data devices are unavailable
Oct 10 09:56:34 compute-0 systemd[1]: libpod-97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1.scope: Deactivated successfully.
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.762840597 +0000 UTC m=+0.529253039 container died 97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:56:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-424a5be4e3629bbd5f43366f779603373620c6b52cadadc1ceaa47e1b127494f-merged.mount: Deactivated successfully.
Oct 10 09:56:34 compute-0 podman[161693]: 2025-10-10 09:56:34.813778635 +0000 UTC m=+0.580191057 container remove 97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 09:56:34 compute-0 systemd[1]: libpod-conmon-97ebba52a774848e5d1b7cb1221caa4d6972a7b661d4d38eb8f0609539d563b1.scope: Deactivated successfully.
Oct 10 09:56:34 compute-0 sudo[161499]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:34 compute-0 sudo[161796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:56:34 compute-0 sudo[161796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:34 compute-0 sudo[161796]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:34 compute-0 sudo[161846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:56:34 compute-0 sudo[161846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:35 compute-0 sudo[161912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foutcmsjwhizccwpwxjbmyvbxvbusvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090194.7528229-1271-97280453155079/AnsiballZ_stat.py'
Oct 10 09:56:35 compute-0 sudo[161912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:35 compute-0 ceph-mon[73551]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:35 compute-0 python3.9[161914]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:56:35 compute-0 sudo[161912]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.437161119 +0000 UTC m=+0.046340740 container create aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:56:35 compute-0 systemd[1]: Started libpod-conmon-aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee.scope.
Oct 10 09:56:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.418740047 +0000 UTC m=+0.027919698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.521878244 +0000 UTC m=+0.131057935 container init aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.529188658 +0000 UTC m=+0.138368309 container start aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 10 09:56:35 compute-0 interesting_banzai[161999]: 167 167
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.533146906 +0000 UTC m=+0.142326527 container attach aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 09:56:35 compute-0 systemd[1]: libpod-aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee.scope: Deactivated successfully.
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.535262294 +0000 UTC m=+0.144441915 container died aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a048bcd9acc742ffeb856ae0bd1132ea9716cb930f8b60c3eadbbd77cfff303b-merged.mount: Deactivated successfully.
Oct 10 09:56:35 compute-0 podman[161982]: 2025-10-10 09:56:35.579146745 +0000 UTC m=+0.188326366 container remove aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 09:56:35 compute-0 systemd[1]: libpod-conmon-aba44f0c91aada2b05bc639588197f542edab255942407a355b658a41347e9ee.scope: Deactivated successfully.
Oct 10 09:56:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:35.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:35 compute-0 podman[162074]: 2025-10-10 09:56:35.787130372 +0000 UTC m=+0.048990665 container create a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:56:35 compute-0 systemd[1]: Started libpod-conmon-a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f.scope.
Oct 10 09:56:35 compute-0 podman[162074]: 2025-10-10 09:56:35.765364813 +0000 UTC m=+0.027225126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec34b05bf55cea34c137e33bee7062ec4ae110858ba8211293483f1647c200c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec34b05bf55cea34c137e33bee7062ec4ae110858ba8211293483f1647c200c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec34b05bf55cea34c137e33bee7062ec4ae110858ba8211293483f1647c200c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec34b05bf55cea34c137e33bee7062ec4ae110858ba8211293483f1647c200c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:35 compute-0 podman[162074]: 2025-10-10 09:56:35.883399778 +0000 UTC m=+0.145260081 container init a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:56:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:35.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:35 compute-0 podman[162074]: 2025-10-10 09:56:35.892255443 +0000 UTC m=+0.154115736 container start a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:56:35 compute-0 podman[162074]: 2025-10-10 09:56:35.896576932 +0000 UTC m=+0.158437245 container attach a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 09:56:35 compute-0 sudo[162168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkljmxfclnbedlxtqnaqywkaxhywknfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090195.6477478-1298-60539183558409/AnsiballZ_file.py'
Oct 10 09:56:35 compute-0 sudo[162168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:36 compute-0 python3.9[162170]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:36 compute-0 sudo[162168]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:36 compute-0 eager_moser[162121]: {
Oct 10 09:56:36 compute-0 eager_moser[162121]:     "0": [
Oct 10 09:56:36 compute-0 eager_moser[162121]:         {
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "devices": [
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "/dev/loop3"
Oct 10 09:56:36 compute-0 eager_moser[162121]:             ],
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "lv_name": "ceph_lv0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "lv_size": "21470642176",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "name": "ceph_lv0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "tags": {
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.cluster_name": "ceph",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.crush_device_class": "",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.encrypted": "0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.osd_id": "0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.type": "block",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.vdo": "0",
Oct 10 09:56:36 compute-0 eager_moser[162121]:                 "ceph.with_tpm": "0"
Oct 10 09:56:36 compute-0 eager_moser[162121]:             },
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "type": "block",
Oct 10 09:56:36 compute-0 eager_moser[162121]:             "vg_name": "ceph_vg0"
Oct 10 09:56:36 compute-0 eager_moser[162121]:         }
Oct 10 09:56:36 compute-0 eager_moser[162121]:     ]
Oct 10 09:56:36 compute-0 eager_moser[162121]: }
Oct 10 09:56:36 compute-0 systemd[1]: libpod-a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f.scope: Deactivated successfully.
Oct 10 09:56:36 compute-0 podman[162074]: 2025-10-10 09:56:36.207263002 +0000 UTC m=+0.469123285 container died a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cec34b05bf55cea34c137e33bee7062ec4ae110858ba8211293483f1647c200c-merged.mount: Deactivated successfully.
Oct 10 09:56:36 compute-0 podman[162074]: 2025-10-10 09:56:36.254250502 +0000 UTC m=+0.516110785 container remove a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_moser, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:56:36 compute-0 systemd[1]: libpod-conmon-a3c04f1f0045f99a55e2e69f7d5be2f3e363bb7ba6c7b1a79cc90443ede3e34f.scope: Deactivated successfully.
Oct 10 09:56:36 compute-0 sudo[161846]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:36 compute-0 sudo[162233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:56:36 compute-0 sudo[162233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:36 compute-0 sudo[162233]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:36 compute-0 sudo[162289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynlaxrosjhanlopwiemksdxsqygugdsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090195.6477478-1298-60539183558409/AnsiballZ_stat.py'
Oct 10 09:56:36 compute-0 sudo[162289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:36 compute-0 sudo[162283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:56:36 compute-0 sudo[162283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:36 compute-0 python3.9[162307]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:56:36 compute-0 sudo[162289]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.805214869 +0000 UTC m=+0.041325591 container create 2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_chatelet, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:56:36 compute-0 systemd[1]: Started libpod-conmon-2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc.scope.
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.78503953 +0000 UTC m=+0.021150272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.905279946 +0000 UTC m=+0.141390678 container init 2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.91471297 +0000 UTC m=+0.150823682 container start 2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.918535062 +0000 UTC m=+0.154645804 container attach 2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 09:56:36 compute-0 eloquent_chatelet[162429]: 167 167
Oct 10 09:56:36 compute-0 systemd[1]: libpod-2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc.scope: Deactivated successfully.
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.922633434 +0000 UTC m=+0.158744166 container died 2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_chatelet, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:56:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5edcdeb61e0a2c1a157379de8c3de3f673d7383841bba8a89a3583c201540a9-merged.mount: Deactivated successfully.
Oct 10 09:56:36 compute-0 podman[162405]: 2025-10-10 09:56:36.970356279 +0000 UTC m=+0.206467011 container remove 2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:56:36 compute-0 systemd[1]: libpod-conmon-2c22583b1c4da6746a971311b11a78c5b90ec90ddf4394814a52960fec74bffc.scope: Deactivated successfully.
Oct 10 09:56:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:37.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:56:37 compute-0 sudo[162542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuetbmhnacfhlkmsgmoitmbefimlokju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090196.665468-1298-156152985257994/AnsiballZ_copy.py'
Oct 10 09:56:37 compute-0 sudo[162542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:37 compute-0 podman[162540]: 2025-10-10 09:56:37.157465585 +0000 UTC m=+0.046833606 container create f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hertz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:56:37 compute-0 systemd[1]: Started libpod-conmon-f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224.scope.
Oct 10 09:56:37 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c844aef46412ace2c47787616a097b32b1474dd868d545d19d731ad39f18c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c844aef46412ace2c47787616a097b32b1474dd868d545d19d731ad39f18c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c844aef46412ace2c47787616a097b32b1474dd868d545d19d731ad39f18c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c844aef46412ace2c47787616a097b32b1474dd868d545d19d731ad39f18c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:37 compute-0 podman[162540]: 2025-10-10 09:56:37.138003099 +0000 UTC m=+0.027371140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:37 compute-0 podman[162540]: 2025-10-10 09:56:37.247593843 +0000 UTC m=+0.136961874 container init f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:56:37 compute-0 podman[162540]: 2025-10-10 09:56:37.254570997 +0000 UTC m=+0.143939008 container start f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:56:37 compute-0 podman[162540]: 2025-10-10 09:56:37.258287837 +0000 UTC m=+0.147655878 container attach f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:56:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:37 compute-0 python3.9[162556]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090196.665468-1298-156152985257994/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:37 compute-0 sudo[162542]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:37] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:37] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:37 compute-0 ceph-mon[73551]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:37 compute-0 sudo[162656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuounvzekdrjhktmzbcxnslodkpuwfls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090196.665468-1298-156152985257994/AnsiballZ_systemd.py'
Oct 10 09:56:37 compute-0 sudo[162656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:37.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:37.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:37 compute-0 python3.9[162662]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:56:37 compute-0 lvm[162709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:56:37 compute-0 lvm[162709]: VG ceph_vg0 finished
Oct 10 09:56:37 compute-0 systemd[1]: Reloading.
Oct 10 09:56:37 compute-0 cranky_hertz[162559]: {}
Oct 10 09:56:38 compute-0 podman[162540]: 2025-10-10 09:56:38.001947569 +0000 UTC m=+0.891315620 container died f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hertz, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:56:38 compute-0 systemd-rc-local-generator[162750]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:38 compute-0 systemd-sysv-generator[162754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:38 compute-0 systemd[1]: libpod-f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224.scope: Deactivated successfully.
Oct 10 09:56:38 compute-0 systemd[1]: libpod-f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224.scope: Consumed 1.128s CPU time.
Oct 10 09:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c844aef46412ace2c47787616a097b32b1474dd868d545d19d731ad39f18c9-merged.mount: Deactivated successfully.
Oct 10 09:56:38 compute-0 podman[162540]: 2025-10-10 09:56:38.284296167 +0000 UTC m=+1.173664178 container remove f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hertz, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:56:38 compute-0 sudo[162656]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:38 compute-0 systemd[1]: libpod-conmon-f516f3a8ee9e148fae698ab1bc3a8cad79a624db06222294a4a88fccbba65224.scope: Deactivated successfully.
Oct 10 09:56:38 compute-0 sudo[162283]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:56:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:56:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:56:38 compute-0 sudo[162770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:56:38 compute-0 sudo[162770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:38 compute-0 sudo[162770]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:38 compute-0 sudo[162860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzlobpxolxbmeojwtndgerzjobzufyts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090196.665468-1298-156152985257994/AnsiballZ_systemd.py'
Oct 10 09:56:38 compute-0 sudo[162860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:38 compute-0 python3.9[162862]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:38 compute-0 systemd[1]: Reloading.
Oct 10 09:56:39 compute-0 systemd-sysv-generator[162895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:39 compute-0 systemd-rc-local-generator[162892]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:39 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 10 09:56:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad894168266941bb710bdb6ffce0145d903dcb49685048d7673a8680e8ceedaf/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad894168266941bb710bdb6ffce0145d903dcb49685048d7673a8680e8ceedaf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:39 compute-0 ceph-mon[73551]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:56:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0.
Oct 10 09:56:39 compute-0 podman[162903]: 2025-10-10 09:56:39.497529889 +0000 UTC m=+0.139395943 container init e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + sudo -E kolla_set_configs
Oct 10 09:56:39 compute-0 podman[162903]: 2025-10-10 09:56:39.536096719 +0000 UTC m=+0.177962773 container start e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 10 09:56:39 compute-0 edpm-start-podman-container[162903]: ovn_metadata_agent
Oct 10 09:56:39 compute-0 podman[162924]: 2025-10-10 09:56:39.636017322 +0000 UTC m=+0.087387561 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:56:39 compute-0 edpm-start-podman-container[162902]: Creating additional drop-in dependency for "ovn_metadata_agent" (e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0)
Oct 10 09:56:39 compute-0 systemd[1]: Reloading.
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Validating config file
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Copying service configuration files
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Writing out command to execute
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 10 09:56:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:39.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: ++ cat /run_command
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + CMD=neutron-ovn-metadata-agent
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + ARGS=
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + sudo kolla_copy_cacerts
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + [[ ! -n '' ]]
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + . kolla_extend_start
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: Running command: 'neutron-ovn-metadata-agent'
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + umask 0022
Oct 10 09:56:39 compute-0 ovn_metadata_agent[162919]: + exec neutron-ovn-metadata-agent
Oct 10 09:56:39 compute-0 systemd-rc-local-generator[162992]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:39 compute-0 systemd-sysv-generator[162998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:39.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:40 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 10 09:56:40 compute-0 sudo[162860]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:41 compute-0 sshd-session[153720]: Connection closed by 192.168.122.30 port 34002
Oct 10 09:56:41 compute-0 sshd-session[153717]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:56:41 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 10 09:56:41 compute-0 systemd[1]: session-53.scope: Consumed 57.998s CPU time.
Oct 10 09:56:41 compute-0 systemd-logind[806]: Session 53 logged out. Waiting for processes to exit.
Oct 10 09:56:41 compute-0 systemd-logind[806]: Removed session 53.
Oct 10 09:56:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.836 162925 INFO neutron.common.config [-] Logging enabled!
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.836 162925 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.836 162925 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.837 162925 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.838 162925 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.839 162925 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.840 162925 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.841 162925 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.841 162925 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.841 162925 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.841 162925 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.841 162925 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.841 162925 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.842 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.843 162925 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.844 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.845 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.845 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.845 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.845 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.845 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.845 162925 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.846 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.847 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.848 162925 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.849 162925 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.850 162925 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.851 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.852 162925 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.853 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.853 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.853 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.853 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.853 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.853 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.854 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.855 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.856 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.857 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.858 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.859 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.860 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.861 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.862 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.863 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.864 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.865 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.866 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.867 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.868 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.869 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.870 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.871 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.872 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.873 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.874 162925 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.883 162925 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.884 162925 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.884 162925 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.884 162925 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.884 162925 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 10 09:56:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:41.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.897 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a1a60c06-0b75-41d0-88d4-dc571cb95004 (UUID: a1a60c06-0b75-41d0-88d4-dc571cb95004) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.917 162925 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.917 162925 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.918 162925 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.918 162925 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.921 162925 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.926 162925 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.931 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a1a60c06-0b75-41d0-88d4-dc571cb95004'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], external_ids={}, name=a1a60c06-0b75-41d0-88d4-dc571cb95004, nb_cfg_timestamp=1760090136030, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.932 162925 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcd21753f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.933 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.933 162925 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.933 162925 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.934 162925 INFO oslo_service.service [-] Starting 1 workers
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.938 162925 DEBUG oslo_service.service [-] Started child 163032 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.942 162925 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp9qksurfp/privsep.sock']
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.942 163032 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-951266'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 10 09:56:41 compute-0 ceph-mon[73551]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.965 163032 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.965 163032 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.965 163032 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.969 163032 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.974 163032 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 09:56:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:41.980 163032 INFO eventlet.wsgi.server [-] (163032) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 10 09:56:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:42 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.700 162925 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.701 162925 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9qksurfp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.546 163038 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.552 163038 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.555 163038 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.555 163038 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163038
Oct 10 09:56:42 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:42.705 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[3718be5a-a0ec-4ad6-b42e-07c770de6323]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 09:56:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:43 compute-0 ceph-mon[73551]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.264 163038 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.264 163038 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.264 163038 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:56:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.844 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[1a923032-425f-498c-a203-6e9b29e6cf4c]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.847 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, column=external_ids, values=({'neutron:ovn-metadata-id': '2c2e36f2-a8a4-5544-8420-c4d663803606'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.857 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.863 162925 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.863 162925 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.863 162925 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.864 162925 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.864 162925 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.864 162925 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.864 162925 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.864 162925 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.864 162925 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.865 162925 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.865 162925 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.865 162925 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.865 162925 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.865 162925 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.865 162925 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.866 162925 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.867 162925 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.867 162925 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.867 162925 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.867 162925 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.868 162925 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.868 162925 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.868 162925 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.868 162925 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.868 162925 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.868 162925 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.869 162925 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.869 162925 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.869 162925 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.869 162925 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.869 162925 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.869 162925 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.870 162925 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.871 162925 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.872 162925 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.873 162925 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.874 162925 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.875 162925 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.876 162925 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.877 162925 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.877 162925 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.877 162925 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.877 162925 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.877 162925 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.877 162925 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.878 162925 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.878 162925 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.878 162925 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.878 162925 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.878 162925 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.878 162925 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.879 162925 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.880 162925 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.880 162925 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.880 162925 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.880 162925 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.880 162925 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.881 162925 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.882 162925 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.882 162925 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.882 162925 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.882 162925 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.882 162925 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.882 162925 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.883 162925 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.884 162925 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.885 162925 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.886 162925 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.887 162925 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.888 162925 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.889 162925 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.890 162925 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.892 162925 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.892 162925 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:43.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.892 162925 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.893 162925 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.894 162925 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.895 162925 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.895 162925 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.895 162925 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.895 162925 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.895 162925 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.895 162925 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.896 162925 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.896 162925 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.896 162925 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.896 162925 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.896 162925 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.897 162925 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.897 162925 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.897 162925 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.897 162925 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.897 162925 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.898 162925 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.898 162925 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.898 162925 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.898 162925 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.898 162925 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.898 162925 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.899 162925 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.899 162925 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.899 162925 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.899 162925 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.899 162925 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.899 162925 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.900 162925 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.901 162925 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.901 162925 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.901 162925 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.901 162925 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.901 162925 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.901 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.902 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.903 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.903 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.903 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.903 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.903 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.903 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.904 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.904 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.904 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.904 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.904 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.904 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.905 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.906 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.906 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.906 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.906 162925 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.906 162925 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.906 162925 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.907 162925 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.907 162925 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:56:43.907 162925 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 09:56:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:45 compute-0 ceph-mon[73551]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:45.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:45.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:56:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:56:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:46 compute-0 sshd-session[163047]: Accepted publickey for zuul from 192.168.122.30 port 37382 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:56:46 compute-0 systemd-logind[806]: New session 54 of user zuul.
Oct 10 09:56:46 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 10 09:56:46 compute-0 sshd-session[163047]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:56:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:47.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:56:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:47.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:56:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:47] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:47] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:47 compute-0 python3.9[163202]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:56:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:47.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:47 compute-0 ceph-mon[73551]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:48 compute-0 sudo[163357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzfjkyjuwsnomzwsynsmdppocewkqkja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090208.29363-62-32617749413392/AnsiballZ_command.py'
Oct 10 09:56:48 compute-0 sudo[163357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:48 compute-0 python3.9[163360]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:56:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:49 compute-0 sudo[163357]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:49.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:49 compute-0 ceph-mon[73551]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:49.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:50 compute-0 sudo[163524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyxezbnvrlzzbnvanbdhvdjepzbzbyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090209.5324905-95-111910087646874/AnsiballZ_systemd_service.py'
Oct 10 09:56:50 compute-0 sudo[163524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095650 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:56:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:50 compute-0 python3.9[163526]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:56:50 compute-0 systemd[1]: Reloading.
Oct 10 09:56:50 compute-0 systemd-rc-local-generator[163555]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:50 compute-0 systemd-sysv-generator[163559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:50 compute-0 sudo[163524]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:51.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:51 compute-0 python3.9[163713]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:56:51 compute-0 ceph-mon[73551]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:51 compute-0 network[163730]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:56:51 compute-0 network[163731]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:56:51 compute-0 network[163732]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:56:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:56:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:51.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:56:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:52 compute-0 sudo[163745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:56:52 compute-0 sudo[163745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:52 compute-0 sudo[163745]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:53.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:53 compute-0 ceph-mon[73551]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:53.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:55 compute-0 ceph-mon[73551]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:56 compute-0 sudo[164026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvcejpgrizeujedfchhqhgoxziucrgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090216.0206907-152-119766440016724/AnsiballZ_systemd_service.py'
Oct 10 09:56:56 compute-0 sudo[164026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:56 compute-0 python3.9[164028]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:56 compute-0 sudo[164026]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:57.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:56:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:57.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:56:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:56:57.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:56:57 compute-0 sudo[164180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icopqceqwgqdjysrjurxkfpzyrtdpifm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090216.8460782-152-49262603838374/AnsiballZ_systemd_service.py'
Oct 10 09:56:57 compute-0 sudo[164180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:57] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:56:57] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 10 09:56:57 compute-0 python3.9[164182]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:57 compute-0 sudo[164180]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:57.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:57 compute-0 ceph-mon[73551]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:57 compute-0 sudo[164333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjjoqcyvwykswvqvmlfvkttleywdgmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090217.640984-152-208521989814275/AnsiballZ_systemd_service.py'
Oct 10 09:56:57 compute-0 sudo[164333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:58 compute-0 python3.9[164335]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:58 compute-0 sudo[164333]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:56:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:56:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:58 compute-0 sudo[164488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekanlkkpnbpeppvtcqrcmpekctixexwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090218.462256-152-278190307782460/AnsiballZ_systemd_service.py'
Oct 10 09:56:58 compute-0 sudo[164488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:59 compute-0 python3.9[164490]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:59 compute-0 sudo[164488]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:56:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:59 compute-0 sudo[164641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdctqmbyydqxfeqfbwvpgumfsmatspz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090219.304718-152-27692592108886/AnsiballZ_systemd_service.py'
Oct 10 09:56:59 compute-0 sudo[164641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:59.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:56:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:56:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:59.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:56:59 compute-0 ceph-mon[73551]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:56:59 compute-0 python3.9[164643]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:57:00 compute-0 sudo[164641]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:00 compute-0 sudo[164795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaspmhiefbhylggkybbhludsnoninznz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090220.1652884-152-32728744781150/AnsiballZ_systemd_service.py'
Oct 10 09:57:00 compute-0 sudo[164795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:00 compute-0 python3.9[164797]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:57:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:00 compute-0 sudo[164795]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:01 compute-0 sudo[164949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqgfxbxdkbhphlebrhvbfqdfkbxnqxjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090220.9677153-152-195281270174014/AnsiballZ_systemd_service.py'
Oct 10 09:57:01 compute-0 sudo[164949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:57:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:57:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:57:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:57:01 compute-0 python3.9[164951]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:57:01 compute-0 sudo[164949]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:57:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:01.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:57:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:01 compute-0 ceph-mon[73551]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:02 compute-0 podman[164978]: 2025-10-10 09:57:02.269890669 +0000 UTC m=+0.111849437 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 10 09:57:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:03 compute-0 sudo[165130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqgkfzjigatvkxoayirjtnddwuqkgpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090222.6852074-308-160823009602962/AnsiballZ_file.py'
Oct 10 09:57:03 compute-0 sudo[165130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:03 compute-0 python3.9[165132]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:03 compute-0 sudo[165130]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 10 09:57:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 10 09:57:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:03.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 10 09:57:03 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 10 09:57:03 compute-0 sudo[165282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxfomiaqbhtrmnmravqrwwpwrgitpsns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090223.6009316-308-80633896792109/AnsiballZ_file.py'
Oct 10 09:57:03 compute-0 sudo[165282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:03.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:03 compute-0 ceph-mon[73551]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:04 compute-0 python3.9[165284]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:04 compute-0 sudo[165282]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:04 compute-0 sudo[165435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmmwuudrjnxhmraqrkmgherfdoehwfeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090224.2431915-308-105341464198436/AnsiballZ_file.py'
Oct 10 09:57:04 compute-0 sudo[165435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:57:04 compute-0 python3.9[165437]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:04 compute-0 sudo[165435]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:05 compute-0 ceph-mon[73551]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:05 compute-0 sudo[165590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzebuvfsloutwhvqlydzjrxaohgslxsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090224.907512-308-175663296288908/AnsiballZ_file.py'
Oct 10 09:57:05 compute-0 sudo[165590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:05 compute-0 python3.9[165592]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:05 compute-0 sudo[165590]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:57:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:05.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:57:05 compute-0 sudo[165742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audtauzlenxnpxfljtrugffzurgskwed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090225.6103444-308-61026760257864/AnsiballZ_file.py'
Oct 10 09:57:05 compute-0 sudo[165742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:57:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:05.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:57:06 compute-0 python3.9[165744]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:06 compute-0 sudo[165742]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:06 compute-0 sudo[165895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyqwdcvlcphrjsbzkmfqkyjyaumszlbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090226.3308976-308-18349640748952/AnsiballZ_file.py'
Oct 10 09:57:06 compute-0 sudo[165895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:06 compute-0 python3.9[165897]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:06 compute-0 sudo[165895]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:07.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:57:07 compute-0 sudo[166048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxiwtsogopaefmudriqbytbjuswuuxgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090227.0095685-308-209801373351122/AnsiballZ_file.py'
Oct 10 09:57:07 compute-0 sudo[166048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:07] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:57:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:07] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 10 09:57:07 compute-0 python3.9[166050]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:07 compute-0 ceph-mon[73551]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:07 compute-0 sudo[166048]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 1023 B/s wr, 146 op/s
Oct 10 09:57:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:08 compute-0 sudo[166202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irnwhtdommtgwixdtwxiqkcscticrfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090228.4839442-458-249517684212697/AnsiballZ_file.py'
Oct 10 09:57:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:08 compute-0 sudo[166202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:09 compute-0 python3.9[166204]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:09 compute-0 sudo[166202]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:09 compute-0 sudo[166354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frwepdwnxtkvlfqqliheucotkepgykxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090229.200081-458-208111570645690/AnsiballZ_file.py'
Oct 10 09:57:09 compute-0 sudo[166354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:09 compute-0 ceph-mon[73551]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 1023 B/s wr, 146 op/s
Oct 10 09:57:09 compute-0 python3.9[166356]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:09 compute-0 sudo[166354]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:09.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:10 compute-0 sudo[166517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngoeoyxzzrdwoypnilirxoddhzqzgtuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090229.834695-458-2999410669855/AnsiballZ_file.py'
Oct 10 09:57:10 compute-0 sudo[166517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:10 compute-0 podman[166481]: 2025-10-10 09:57:10.164070061 +0000 UTC m=+0.064683661 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:57:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095710 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:57:10 compute-0 python3.9[166527]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:10 compute-0 sudo[166517]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 938 B/s wr, 145 op/s
Oct 10 09:57:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:10 compute-0 sudo[166678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adqvytzekuahwlckklfeosgcjxmbnaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090230.5232701-458-179704091810617/AnsiballZ_file.py'
Oct 10 09:57:10 compute-0 sudo[166678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:11 compute-0 python3.9[166680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:11 compute-0 sudo[166678]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:11 compute-0 ceph-mon[73551]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 938 B/s wr, 145 op/s
Oct 10 09:57:11 compute-0 sudo[166830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjuqbrsdmjxhcptgtwbflyrmjcgvaqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090231.235996-458-193068082780439/AnsiballZ_file.py'
Oct 10 09:57:11 compute-0 sudo[166830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:11.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:11 compute-0 python3.9[166832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:11 compute-0 sudo[166830]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:57:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:11.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:57:12 compute-0 sudo[166983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvicucguvvwuzdkvozbowhocghzlafun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090231.9453382-458-270022810814453/AnsiballZ_file.py'
Oct 10 09:57:12 compute-0 sudo[166983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 938 B/s wr, 145 op/s
Oct 10 09:57:12 compute-0 python3.9[166985]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:12 compute-0 sudo[166983]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:12 compute-0 sudo[167087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:57:12 compute-0 sudo[167087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:12 compute-0 sudo[167087]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:13 compute-0 sudo[167161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytorapojibjsthhsugspomctforboloy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090232.6664817-458-140002751478945/AnsiballZ_file.py'
Oct 10 09:57:13 compute-0 sudo[167161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:13 compute-0 python3.9[167163]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:13 compute-0 sudo[167161]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:13 compute-0 ceph-mon[73551]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 938 B/s wr, 145 op/s
Oct 10 09:57:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:57:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:13.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:57:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:13.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:14 compute-0 sudo[167314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpewfsjslagczhdizbkrrmomsiphhgrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090233.8744357-611-4439085726371/AnsiballZ_command.py'
Oct 10 09:57:14 compute-0 sudo[167314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:14 compute-0 python3.9[167316]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:14 compute-0 sudo[167314]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:15 compute-0 python3.9[167469]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 09:57:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:15 compute-0 ceph-mon[73551]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:15.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:15.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:16 compute-0 sudo[167619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjzhetdmoezrjdcwzgfbknmhevjkulex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090235.7199192-665-84793777919365/AnsiballZ_systemd_service.py'
Oct 10 09:57:16 compute-0 sudo[167619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:57:16
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'vms', '.nfs', 'default.rgw.meta', '.rgw.root']
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:57:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:57:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:16 compute-0 python3.9[167621]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:57:16 compute-0 systemd[1]: Reloading.
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:57:16 compute-0 systemd-sysv-generator[167649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:57:16 compute-0 systemd-rc-local-generator[167644]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:57:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:57:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:16 compute-0 sudo[167619]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:17.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:57:17 compute-0 sudo[167808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qybcelptnpftfdwjrrylxrdxcbpmhkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090237.0285537-689-161049095260711/AnsiballZ_command.py'
Oct 10 09:57:17 compute-0 sudo[167808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:17 compute-0 python3.9[167810]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:17 compute-0 sudo[167808]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:17 compute-0 ceph-mon[73551]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:17.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:17.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:18 compute-0 sudo[167961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfcvvjpjjkvapxfplevqrjjttkmxdsvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090237.712835-689-56185779790940/AnsiballZ_command.py'
Oct 10 09:57:18 compute-0 sudo[167961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:18 compute-0 python3.9[167964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:18 compute-0 sudo[167961]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095718 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:57:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:18 compute-0 sudo[168115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtudrekiydlulhkawwxshyyhvvdedhiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090238.391664-689-225373034882828/AnsiballZ_command.py'
Oct 10 09:57:18 compute-0 sudo[168115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:18 compute-0 python3.9[168117]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:18 compute-0 sudo[168115]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:19 compute-0 sudo[168270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbskkqgvcadfihrrsvjzqjrwzaraexma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090239.0675073-689-241921092970486/AnsiballZ_command.py'
Oct 10 09:57:19 compute-0 sudo[168270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:19 compute-0 python3.9[168272]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:19 compute-0 sudo[168270]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:19 compute-0 ceph-mon[73551]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:19.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:19.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:20 compute-0 sudo[168424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlgituwpwfsboaxubybfdyprwcgruyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090239.7862854-689-258466532643904/AnsiballZ_command.py'
Oct 10 09:57:20 compute-0 sudo[168424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:20 compute-0 python3.9[168426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:20 compute-0 sudo[168424]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:57:20 compute-0 sudo[168577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhdzuwrokcxrpgfmbrrqlqzeithiwcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090240.4219959-689-75326832202641/AnsiballZ_command.py'
Oct 10 09:57:20 compute-0 sudo[168577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:20 compute-0 python3.9[168580]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:20 compute-0 sudo[168577]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:21 compute-0 sudo[168731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmuhaqogtqtzsnkrhbuzbstuxffxotwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090241.1278446-689-72838465854881/AnsiballZ_command.py'
Oct 10 09:57:21 compute-0 sudo[168731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:21 compute-0 ceph-mon[73551]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:57:21 compute-0 python3.9[168733]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:21 compute-0 sudo[168731]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:21.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:21.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:57:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:23 compute-0 sudo[168886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibkajrkilqqqjjolimggdsrmwqnefgaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090242.9839199-851-196446346898574/AnsiballZ_getent.py'
Oct 10 09:57:23 compute-0 sudo[168886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:23 compute-0 python3.9[168888]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 10 09:57:23 compute-0 sudo[168886]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:23 compute-0 ceph-mon[73551]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:57:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:23.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:23.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:24 compute-0 sudo[169040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjvmsbxdcgigajbhgvsrylvirtfagxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090243.87864-875-7340637595244/AnsiballZ_group.py'
Oct 10 09:57:24 compute-0 sudo[169040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:57:24 compute-0 python3.9[169042]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:57:24 compute-0 groupadd[169043]: group added to /etc/group: name=libvirt, GID=42473
Oct 10 09:57:24 compute-0 groupadd[169043]: group added to /etc/gshadow: name=libvirt
Oct 10 09:57:24 compute-0 groupadd[169043]: new group: name=libvirt, GID=42473
Oct 10 09:57:24 compute-0 sudo[169040]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:25 compute-0 sudo[169199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkufplrqhnnvpkfrgvubzcquxgnfohcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090244.909985-899-49266737320898/AnsiballZ_user.py'
Oct 10 09:57:25 compute-0 sudo[169199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:25 compute-0 ceph-mon[73551]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:57:25 compute-0 python3.9[169201]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 09:57:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:25 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:57:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:25 compute-0 useradd[169203]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 10 09:57:25 compute-0 sudo[169199]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:25.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:57:26 compute-0 sudo[169361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khgjbllzljcysnhdahzfaculljozugfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090246.3240619-932-139981636179816/AnsiballZ_setup.py'
Oct 10 09:57:26 compute-0 sudo[169361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:26 compute-0 python3.9[169363]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:57:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:27.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:57:27 compute-0 sudo[169361]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:27 compute-0 sudo[169447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brxhaakysejzvaeioiiltrotonztocrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090246.3240619-932-139981636179816/AnsiballZ_dnf.py'
Oct 10 09:57:27 compute-0 sudo[169447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:57:27 compute-0 ceph-mon[73551]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:57:27 compute-0 python3.9[169449]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:57:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:29 compute-0 ceph-mon[73551]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:29.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:57:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:57:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:57:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:31 compute-0 ceph-mon[73551]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 09:57:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:31.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 09:57:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:57:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:57:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:33 compute-0 sudo[169465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:57:33 compute-0 sudo[169465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:33 compute-0 sudo[169465]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:33 compute-0 podman[169489]: 2025-10-10 09:57:33.202310866 +0000 UTC m=+0.111687793 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 09:57:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:57:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 09:57:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 09:57:33 compute-0 ceph-mon[73551]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:33.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:35 compute-0 ceph-mon[73551]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:35.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:37.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:57:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:37.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:57:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:37] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:57:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:37] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:57:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:57:37 compute-0 ceph-mon[73551]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:37.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:57:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:38 compute-0 sudo[169612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:57:38 compute-0 sudo[169612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:38 compute-0 sudo[169612]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:38 compute-0 sudo[169641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:57:38 compute-0 sudo[169641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:39 compute-0 sudo[169641]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:57:39 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:57:39 compute-0 sudo[169721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:57:39 compute-0 sudo[169721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:39 compute-0 sudo[169721]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:39 compute-0 sudo[169748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:57:39 compute-0 sudo[169748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:39.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:57:39 compute-0 ceph-mon[73551]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:57:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:57:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:39.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.073529084 +0000 UTC m=+0.041232671 container create 76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_austin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 09:57:40 compute-0 systemd[1]: Started libpod-conmon-76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10.scope.
Oct 10 09:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.145916953 +0000 UTC m=+0.113620560 container init 76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.055092189 +0000 UTC m=+0.022795796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.15321398 +0000 UTC m=+0.120917577 container start 76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_austin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.155876552 +0000 UTC m=+0.123580169 container attach 76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:57:40 compute-0 focused_austin[169853]: 167 167
Oct 10 09:57:40 compute-0 systemd[1]: libpod-76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10.scope: Deactivated successfully.
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.159096026 +0000 UTC m=+0.126799613 container died 76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_austin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-19eeb1f0a04ddefb0b6304912a834a88b2b4e9fc06d345a229e8b8df472de27c-merged.mount: Deactivated successfully.
Oct 10 09:57:40 compute-0 podman[169832]: 2025-10-10 09:57:40.202626359 +0000 UTC m=+0.170329946 container remove 76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_austin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:57:40 compute-0 systemd[1]: libpod-conmon-76f14f6506e203d4e901ee48e77b84b3f5dabd5b2941418502a82d5cbc81cf10.scope: Deactivated successfully.
Oct 10 09:57:40 compute-0 podman[169874]: 2025-10-10 09:57:40.279195133 +0000 UTC m=+0.070314121 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.355249396 +0000 UTC m=+0.041404015 container create bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:57:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/095740 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:57:40 compute-0 systemd[1]: Started libpod-conmon-bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8.scope.
Oct 10 09:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ee12a6d57b17352e4d1061509ef99a3bf7d47c345b17ac358acd50a3c1e74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ee12a6d57b17352e4d1061509ef99a3bf7d47c345b17ac358acd50a3c1e74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ee12a6d57b17352e4d1061509ef99a3bf7d47c345b17ac358acd50a3c1e74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ee12a6d57b17352e4d1061509ef99a3bf7d47c345b17ac358acd50a3c1e74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ee12a6d57b17352e4d1061509ef99a3bf7d47c345b17ac358acd50a3c1e74a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.337726972 +0000 UTC m=+0.023881631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.439871596 +0000 UTC m=+0.126026245 container init bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.445918494 +0000 UTC m=+0.132073123 container start bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.44918912 +0000 UTC m=+0.135343769 container attach bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 09:57:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:57:40 compute-0 inspiring_wing[169927]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:57:40 compute-0 inspiring_wing[169927]: --> All data devices are unavailable
Oct 10 09:57:40 compute-0 systemd[1]: libpod-bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8.scope: Deactivated successfully.
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.815267575 +0000 UTC m=+0.501422204 container died bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ee12a6d57b17352e4d1061509ef99a3bf7d47c345b17ac358acd50a3c1e74a-merged.mount: Deactivated successfully.
Oct 10 09:57:40 compute-0 podman[169906]: 2025-10-10 09:57:40.852436012 +0000 UTC m=+0.538590641 container remove bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:57:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:40 compute-0 systemd[1]: libpod-conmon-bb466e54889b537343882a7ee8f919f52009ab6217a00abae7f8db65f2af65b8.scope: Deactivated successfully.
Oct 10 09:57:40 compute-0 sudo[169748]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:40 compute-0 sudo[169973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:57:40 compute-0 sudo[169973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:40 compute-0 sudo[169973]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:41 compute-0 sudo[169999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:57:41 compute-0 sudo[169999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.529477032 +0000 UTC m=+0.063044833 container create 50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:57:41 compute-0 systemd[1]: Started libpod-conmon-50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61.scope.
Oct 10 09:57:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.507618278 +0000 UTC m=+0.041186139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.612598858 +0000 UTC m=+0.146166689 container init 50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.619620499 +0000 UTC m=+0.153188320 container start 50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.622915835 +0000 UTC m=+0.156483646 container attach 50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:57:41 compute-0 exciting_khayyam[170076]: 167 167
Oct 10 09:57:41 compute-0 systemd[1]: libpod-50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61.scope: Deactivated successfully.
Oct 10 09:57:41 compute-0 conmon[170076]: conmon 50c5f7829a1cbb26c3e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61.scope/container/memory.events
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.630171292 +0000 UTC m=+0.163739103 container died 50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b61dec4b54dc7682bf2f44ea5223753683284b41aae723d8349c1ed45e35711-merged.mount: Deactivated successfully.
Oct 10 09:57:41 compute-0 podman[170060]: 2025-10-10 09:57:41.668890965 +0000 UTC m=+0.202458776 container remove 50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:57:41 compute-0 systemd[1]: libpod-conmon-50c5f7829a1cbb26c3e6286da2ccebcf445dca907fe0ea6e2cd3b7404d4a9d61.scope: Deactivated successfully.
Oct 10 09:57:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:41.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:41 compute-0 podman[170099]: 2025-10-10 09:57:41.875877895 +0000 UTC m=+0.075341618 container create 4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:57:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:57:41.877 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:57:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:57:41.877 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:57:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:57:41.877 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:57:41 compute-0 systemd[1]: Started libpod-conmon-4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9.scope.
Oct 10 09:57:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:57:41 compute-0 podman[170099]: 2025-10-10 09:57:41.844964242 +0000 UTC m=+0.044428065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:57:41 compute-0 ceph-mon[73551]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e208cc5c51cbd1eb90f6745ba2c1949201f93d1b80798a38eb3d9d56099275/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e208cc5c51cbd1eb90f6745ba2c1949201f93d1b80798a38eb3d9d56099275/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e208cc5c51cbd1eb90f6745ba2c1949201f93d1b80798a38eb3d9d56099275/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e208cc5c51cbd1eb90f6745ba2c1949201f93d1b80798a38eb3d9d56099275/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:41 compute-0 podman[170099]: 2025-10-10 09:57:41.957369171 +0000 UTC m=+0.156832954 container init 4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 10 09:57:41 compute-0 podman[170099]: 2025-10-10 09:57:41.964761012 +0000 UTC m=+0.164224745 container start 4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:57:41 compute-0 podman[170099]: 2025-10-10 09:57:41.968259583 +0000 UTC m=+0.167723366 container attach 4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 09:57:42 compute-0 admiring_cohen[170116]: {
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:     "0": [
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:         {
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "devices": [
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "/dev/loop3"
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             ],
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "lv_name": "ceph_lv0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "lv_size": "21470642176",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "name": "ceph_lv0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "tags": {
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.cluster_name": "ceph",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.crush_device_class": "",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.encrypted": "0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.osd_id": "0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.type": "block",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.vdo": "0",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:                 "ceph.with_tpm": "0"
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             },
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "type": "block",
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:             "vg_name": "ceph_vg0"
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:         }
Oct 10 09:57:42 compute-0 admiring_cohen[170116]:     ]
Oct 10 09:57:42 compute-0 admiring_cohen[170116]: }
Oct 10 09:57:42 compute-0 systemd[1]: libpod-4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9.scope: Deactivated successfully.
Oct 10 09:57:42 compute-0 podman[170126]: 2025-10-10 09:57:42.344684976 +0000 UTC m=+0.031890945 container died 4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 09:57:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-56e208cc5c51cbd1eb90f6745ba2c1949201f93d1b80798a38eb3d9d56099275-merged.mount: Deactivated successfully.
Oct 10 09:57:42 compute-0 podman[170126]: 2025-10-10 09:57:42.391516465 +0000 UTC m=+0.078722414 container remove 4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 09:57:42 compute-0 systemd[1]: libpod-conmon-4a718f5a6f96d180e7e643e0c6e293da4d5a6e175b2909a6636604d406936eb9.scope: Deactivated successfully.
Oct 10 09:57:42 compute-0 sudo[169999]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:57:42 compute-0 sudo[170141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:57:42 compute-0 sudo[170141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:42 compute-0 sudo[170141]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:42 compute-0 sudo[170166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:57:42 compute-0 sudo[170166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.081667678 +0000 UTC m=+0.037716770 container create 912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldstine, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:57:43 compute-0 systemd[1]: Started libpod-conmon-912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a.scope.
Oct 10 09:57:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.157899775 +0000 UTC m=+0.113948947 container init 912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldstine, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.065920815 +0000 UTC m=+0.021969917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.167598728 +0000 UTC m=+0.123647840 container start 912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.172499851 +0000 UTC m=+0.128549023 container attach 912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:57:43 compute-0 optimistic_goldstine[170250]: 167 167
Oct 10 09:57:43 compute-0 systemd[1]: libpod-912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a.scope: Deactivated successfully.
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.175173362 +0000 UTC m=+0.131222484 container died 912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16c33ab2d09cbe42c2cc329a3f4ba41163dd150d3b0398ff69d6192092984a5-merged.mount: Deactivated successfully.
Oct 10 09:57:43 compute-0 podman[170233]: 2025-10-10 09:57:43.228803268 +0000 UTC m=+0.184852390 container remove 912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 09:57:43 compute-0 systemd[1]: libpod-conmon-912f6aaa8d8b17660c8b880287ce493d5dcb371c135409ba87c12d2842e9b64a.scope: Deactivated successfully.
Oct 10 09:57:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:43 compute-0 podman[170273]: 2025-10-10 09:57:43.457100549 +0000 UTC m=+0.079070163 container create 34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:57:43 compute-0 systemd[1]: Started libpod-conmon-34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015.scope.
Oct 10 09:57:43 compute-0 podman[170273]: 2025-10-10 09:57:43.424887326 +0000 UTC m=+0.046857040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:57:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0ef1b05a3256de0506fc90d9ff92b6ffe354bd404c408f353fe103e931f22d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0ef1b05a3256de0506fc90d9ff92b6ffe354bd404c408f353fe103e931f22d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0ef1b05a3256de0506fc90d9ff92b6ffe354bd404c408f353fe103e931f22d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0ef1b05a3256de0506fc90d9ff92b6ffe354bd404c408f353fe103e931f22d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:57:43 compute-0 podman[170273]: 2025-10-10 09:57:43.561806721 +0000 UTC m=+0.183776355 container init 34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:57:43 compute-0 podman[170273]: 2025-10-10 09:57:43.572531139 +0000 UTC m=+0.194500753 container start 34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:57:43 compute-0 podman[170273]: 2025-10-10 09:57:43.577650847 +0000 UTC m=+0.199620461 container attach 34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:57:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:43.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:57:43 compute-0 ceph-mon[73551]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:57:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:44 compute-0 lvm[170364]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:57:44 compute-0 lvm[170364]: VG ceph_vg0 finished
Oct 10 09:57:44 compute-0 determined_kilby[170289]: {}
Oct 10 09:57:44 compute-0 systemd[1]: libpod-34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015.scope: Deactivated successfully.
Oct 10 09:57:44 compute-0 systemd[1]: libpod-34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015.scope: Consumed 1.176s CPU time.
Oct 10 09:57:44 compute-0 podman[170273]: 2025-10-10 09:57:44.352421759 +0000 UTC m=+0.974391383 container died 34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e0ef1b05a3256de0506fc90d9ff92b6ffe354bd404c408f353fe103e931f22d-merged.mount: Deactivated successfully.
Oct 10 09:57:44 compute-0 podman[170273]: 2025-10-10 09:57:44.412775969 +0000 UTC m=+1.034745583 container remove 34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:57:44 compute-0 systemd[1]: libpod-conmon-34b1db7af85484ab42860538a14d77992d030157003405560d93e7a421700015.scope: Deactivated successfully.
Oct 10 09:57:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:44 compute-0 sudo[170166]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:57:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:57:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:44 compute-0 sudo[170379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:57:44 compute-0 sudo[170379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:44 compute-0 sudo[170379]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:45 compute-0 ceph-mon[73551]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:45.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:57:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:57:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:57:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c001de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:57:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:47.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:57:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:47.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:57:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:47 compute-0 ceph-mon[73551]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 10 09:57:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:47.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 10 09:57:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 10 09:57:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 10 09:57:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:49 compute-0 ceph-mon[73551]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:49.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:49.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:57:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:51 compute-0 ceph-mon[73551]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000047s ======
Oct 10 09:57:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 10 09:57:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:51.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:53 compute-0 sudo[170421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:57:53 compute-0 sudo[170421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:53 compute-0 sudo[170421]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:53 compute-0 ceph-mon[73551]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:53.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000047s ======
Oct 10 09:57:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:53.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Oct 10 09:57:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:55 compute-0 ceph-mon[73551]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:56 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:57:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:57:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:57:57.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:57:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:57:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:57:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:57 compute-0 ceph-mon[73551]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:57.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:57.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:57:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:57:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:59.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:59 compute-0 ceph-mon[73551]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:57:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:57:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:57:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:59.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50001060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:58:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:01.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:01 compute-0 ceph-mon[73551]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:01.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:03.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:03 compute-0 ceph-mon[73551]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:04 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 10 09:58:04 compute-0 podman[170465]: 2025-10-10 09:58:04.252830797 +0000 UTC m=+0.098350508 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:58:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:06 compute-0 ceph-mon[73551]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:06 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:58:06 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:58:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:07.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:58:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:07 compute-0 ceph-mon[73551]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:07] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:58:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:07] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:58:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:07.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:07.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:09 compute-0 ceph-mon[73551]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:09.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:09.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:11 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 10 09:58:11 compute-0 podman[170505]: 2025-10-10 09:58:11.260754544 +0000 UTC m=+0.086330280 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:58:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:11 compute-0 ceph-mon[73551]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:11.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:11.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:13 compute-0 sudo[170526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:58:13 compute-0 sudo[170526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:13 compute-0 sudo[170526]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:13 compute-0 ceph-mon[73551]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:13.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:13.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:15.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:15 compute-0 ceph-mon[73551]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:15.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:58:16
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'vms', 'backups', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:58:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:58:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:58:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:58:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:17.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:58:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:17.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:58:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:17] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 09:58:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:17] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 09:58:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:17.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:17.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:18 compute-0 ceph-mon[73551]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:18 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 09:58:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:19 compute-0 ceph-mon[73551]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:19.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:21 compute-0 ceph-mon[73551]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:22.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:23 compute-0 ceph-mon[73551]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 10 09:58:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 10 09:58:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:25 compute-0 ceph-mon[73551]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:25.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:26.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:27.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:58:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:27] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 09:58:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:27] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 09:58:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:27 compute-0 ceph-mon[73551]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:27.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:58:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:58:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:29 compute-0 ceph-mon[73551]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:29.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:30.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:58:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:31 compute-0 ceph-mon[73551]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:31.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:33 compute-0 sudo[178751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:58:33 compute-0 sudo[178751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:33 compute-0 sudo[178751]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:58:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:33.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:58:33 compute-0 ceph-mon[73551]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=sqlstore.transactions t=2025-10-10T09:58:34.722146165Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 10 09:58:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=cleanup t=2025-10-10T09:58:34.728544018Z level=info msg="Completed cleanup jobs" duration=17.579387ms
Oct 10 09:58:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=sqlstore.transactions t=2025-10-10T09:58:34.733186505Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Oct 10 09:58:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugins.update.checker t=2025-10-10T09:58:34.851376381Z level=info msg="Update check succeeded" duration=50.189ms
Oct 10 09:58:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana.update.checker t=2025-10-10T09:58:34.88102684Z level=info msg="Update check succeeded" duration=45.346877ms
Oct 10 09:58:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0048a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:35 compute-0 podman[179887]: 2025-10-10 09:58:35.246896624 +0000 UTC m=+0.090779927 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 09:58:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:58:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:58:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:58:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:58:36 compute-0 ceph-mon[73551]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:37.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:58:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:37.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:58:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:37 compute-0 ceph-mon[73551]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:37] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 09:58:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:37] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 09:58:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0048c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:58:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:37.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:58:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:39 compute-0 ceph-mon[73551]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:39.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c0048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:41 compute-0 ceph-mon[73551]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:58:41.878 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:58:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:58:41.878 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:58:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:58:41.878 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:58:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:41.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:42 compute-0 podman[183974]: 2025-10-10 09:58:42.207376689 +0000 UTC m=+0.054081057 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 10 09:58:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:42 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:43 compute-0 ceph-mon[73551]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:58:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:58:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:44 compute-0 sudo[185512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:58:44 compute-0 sudo[185512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:44 compute-0 sudo[185512]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:44 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:44 compute-0 sudo[185576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:58:44 compute-0 sudo[185576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:45 compute-0 sudo[185576]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 10 09:58:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:45 compute-0 ceph-mon[73551]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:45.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:58:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:58:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500033a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:47.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:58:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:47.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:58:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 09:58:47 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 09:58:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:47] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:58:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:47] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:58:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:47 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:48 compute-0 ceph-mon[73551]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:48 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:48 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 09:58:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 10 09:58:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 09:58:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:48 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 10 09:58:49 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:58:49 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:58:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 09:58:49 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:58:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 09:58:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:49 compute-0 ceph-mon[73551]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:49.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 09:58:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:50 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6400b710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 09:58:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:58:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 09:58:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:58:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 09:58:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:58:51 compute-0 sudo[187489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:58:51 compute-0 sudo[187489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:51 compute-0 sudo[187489]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:51 compute-0 sudo[187514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:58:51 compute-0 sudo[187514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:58:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:58:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:51.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:51 compute-0 podman[187583]: 2025-10-10 09:58:51.902929553 +0000 UTC m=+0.030026931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:58:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:52 compute-0 podman[187583]: 2025-10-10 09:58:52.034208289 +0000 UTC m=+0.161305647 container create 5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_diffie, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:58:52 compute-0 systemd[1]: Started libpod-conmon-5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa.scope.
Oct 10 09:58:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:58:52 compute-0 podman[187583]: 2025-10-10 09:58:52.260834807 +0000 UTC m=+0.387932255 container init 5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:58:52 compute-0 podman[187583]: 2025-10-10 09:58:52.269637314 +0000 UTC m=+0.396734702 container start 5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:58:52 compute-0 musing_diffie[187603]: 167 167
Oct 10 09:58:52 compute-0 systemd[1]: libpod-5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa.scope: Deactivated successfully.
Oct 10 09:58:52 compute-0 podman[187583]: 2025-10-10 09:58:52.411816475 +0000 UTC m=+0.538914023 container attach 5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:58:52 compute-0 podman[187583]: 2025-10-10 09:58:52.413559413 +0000 UTC m=+0.540656831 container died 5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:58:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:52 compute-0 ceph-mon[73551]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:58:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:58:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f145b100a3fb38e3175a7c51241c78dc550f8bc5074b54ae44418039986f77-merged.mount: Deactivated successfully.
Oct 10 09:58:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:52 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:53 compute-0 podman[187583]: 2025-10-10 09:58:53.015842613 +0000 UTC m=+1.142939951 container remove 5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_diffie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:58:53 compute-0 systemd[1]: libpod-conmon-5a49f521e2c730991c5573f5cca395022a415a64fd244ca3a1a66e44134a32fa.scope: Deactivated successfully.
Oct 10 09:58:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:53 compute-0 podman[187629]: 2025-10-10 09:58:53.172525558 +0000 UTC m=+0.026357471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:58:53 compute-0 podman[187629]: 2025-10-10 09:58:53.331991274 +0000 UTC m=+0.185823177 container create 4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:58:53 compute-0 sudo[187643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:58:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:53 compute-0 sudo[187643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:53 compute-0 sudo[187643]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:53 compute-0 systemd[1]: Started libpod-conmon-4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435.scope.
Oct 10 09:58:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e987c87b7f34624567c286765b72043b36ffd94b99e73e6f1c4742df3aec1f41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e987c87b7f34624567c286765b72043b36ffd94b99e73e6f1c4742df3aec1f41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e987c87b7f34624567c286765b72043b36ffd94b99e73e6f1c4742df3aec1f41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e987c87b7f34624567c286765b72043b36ffd94b99e73e6f1c4742df3aec1f41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e987c87b7f34624567c286765b72043b36ffd94b99e73e6f1c4742df3aec1f41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:53 compute-0 podman[187629]: 2025-10-10 09:58:53.765050102 +0000 UTC m=+0.618882085 container init 4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 09:58:53 compute-0 podman[187629]: 2025-10-10 09:58:53.781425656 +0000 UTC m=+0.635257579 container start 4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:58:53 compute-0 podman[187629]: 2025-10-10 09:58:53.801529352 +0000 UTC m=+0.655361365 container attach 4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:58:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:54.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
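[editor's note] The anonymous "HEAD / HTTP/1.0" requests that beast logs above arrive from 192.168.122.100 and 192.168.122.102 roughly every two seconds, which is the pattern of an external load-balancer health probe against the radosgw frontend. A minimal sketch of one such probe follows; the target host and port 8080 are assumptions (the log does not record which port beast is listening on), and the stdlib client speaks HTTP/1.1 rather than the HTTP/1.0 seen in the log.

```python
# Hedged sketch: reproduce one health probe like those beast logs above.
# Host and port are assumptions; adjust to the actual RGW endpoint.
import http.client

conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=5)
conn.request("HEAD", "/")          # anonymous HEAD /, as in the access log
resp = conn.getresponse()
print(resp.status, resp.reason)    # the log shows 200 with near-zero latency
conn.close()
```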
Oct 10 09:58:54 compute-0 ceph-mon[73551]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:54 compute-0 youthful_fermi[187670]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:58:54 compute-0 youthful_fermi[187670]: --> All data devices are unavailable
Oct 10 09:58:54 compute-0 systemd[1]: libpod-4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435.scope: Deactivated successfully.
Oct 10 09:58:54 compute-0 podman[187629]: 2025-10-10 09:58:54.215849728 +0000 UTC m=+1.069681701 container died 4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:58:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:54 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e987c87b7f34624567c286765b72043b36ffd94b99e73e6f1c4742df3aec1f41-merged.mount: Deactivated successfully.
Oct 10 09:58:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:55 compute-0 podman[187629]: 2025-10-10 09:58:55.111303029 +0000 UTC m=+1.965134932 container remove 4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:58:55 compute-0 ceph-mon[73551]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:55 compute-0 sudo[187514]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:55 compute-0 systemd[1]: libpod-conmon-4a6c08954c45e984d82f314642450ac6912c6bfc8b1f062872a0dfcb54b0b435.scope: Deactivated successfully.
Oct 10 09:58:55 compute-0 sudo[187710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:58:55 compute-0 sudo[187710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:55 compute-0 sudo[187710]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:55 compute-0 sudo[187735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:58:55 compute-0 sudo[187735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:55 compute-0 podman[187802]: 2025-10-10 09:58:55.865116607 +0000 UTC m=+0.116552826 container create 7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 09:58:55 compute-0 podman[187802]: 2025-10-10 09:58:55.778706617 +0000 UTC m=+0.030142866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:58:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:55 compute-0 systemd[1]: Started libpod-conmon-7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8.scope.
Oct 10 09:58:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:58:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:56.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:56 compute-0 podman[187802]: 2025-10-10 09:58:56.159236709 +0000 UTC m=+0.410673018 container init 7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:58:56 compute-0 podman[187802]: 2025-10-10 09:58:56.1712365 +0000 UTC m=+0.422672729 container start 7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:58:56 compute-0 infallible_jennings[187818]: 167 167
Oct 10 09:58:56 compute-0 systemd[1]: libpod-7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8.scope: Deactivated successfully.
Oct 10 09:58:56 compute-0 podman[187802]: 2025-10-10 09:58:56.238680742 +0000 UTC m=+0.490116961 container attach 7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:58:56 compute-0 podman[187802]: 2025-10-10 09:58:56.2392357 +0000 UTC m=+0.490671919 container died 7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 09:58:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:56 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:57.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:58:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:58:57.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
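[editor's note] The two Alertmanager lines above show webhook notifications to the Ceph dashboard receivers on compute-1 and compute-2 failing with "i/o timeout" and "context deadline exceeded", i.e. a plain TCP reachability problem rather than an HTTP error. A minimal reachability check against the same endpoints is sketched below; hostnames and port 8443 are copied from the log lines, and in a real deployment the receivers may additionally require TLS.

```python
# Hedged sketch: probe the dashboard webhook receivers that Alertmanager
# reports as unreachable in the log above.
import socket

RECEIVERS = [
    ("compute-1.ctlplane.example.com", 8443),
    ("compute-2.ctlplane.example.com", 8443),
]

for host, port in RECEIVERS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        # Matches the 'dial tcp ... i/o timeout' symptom in the dispatcher log
        print(f"{host}:{port} unreachable: {exc}")
```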
Oct 10 09:58:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:57] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:58:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:58:57] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:58:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:58.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:58:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:58 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-df98849b725a14cc0767a7ef3e02422959a4910e5f4e4d964b7f30da3ac3f41d-merged.mount: Deactivated successfully.
Oct 10 09:58:59 compute-0 ceph-mon[73551]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:59 compute-0 podman[187802]: 2025-10-10 09:58:59.26421311 +0000 UTC m=+3.515649329 container remove 7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 09:58:59 compute-0 systemd[1]: libpod-conmon-7079d11234a8382fca4dacd13131c81d5589032d4bb4016f9e08201aa84920f8.scope: Deactivated successfully.
Oct 10 09:58:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:58:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:59 compute-0 podman[187847]: 2025-10-10 09:58:59.41800092 +0000 UTC m=+0.024503612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:58:59 compute-0 podman[187847]: 2025-10-10 09:58:59.547662682 +0000 UTC m=+0.154165344 container create d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:58:59 compute-0 systemd[1]: Started libpod-conmon-d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3.scope.
Oct 10 09:58:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df17aac0c190456a7d9ecafbdc6c65fd073bdd76f75a9a9e091a8a76dcb9e12f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df17aac0c190456a7d9ecafbdc6c65fd073bdd76f75a9a9e091a8a76dcb9e12f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df17aac0c190456a7d9ecafbdc6c65fd073bdd76f75a9a9e091a8a76dcb9e12f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df17aac0c190456a7d9ecafbdc6c65fd073bdd76f75a9a9e091a8a76dcb9e12f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:58:59 compute-0 podman[187847]: 2025-10-10 09:58:59.700597465 +0000 UTC m=+0.307100157 container init d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_payne, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:58:59 compute-0 podman[187847]: 2025-10-10 09:58:59.712843994 +0000 UTC m=+0.319346696 container start d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 09:58:59 compute-0 podman[187847]: 2025-10-10 09:58:59.72465069 +0000 UTC m=+0.331153452 container attach d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Oct 10 09:58:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:58:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:58:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]: {
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:     "0": [
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:         {
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "devices": [
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "/dev/loop3"
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             ],
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "lv_name": "ceph_lv0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "lv_size": "21470642176",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "name": "ceph_lv0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "tags": {
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.cluster_name": "ceph",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.crush_device_class": "",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.encrypted": "0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.osd_id": "0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.type": "block",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.vdo": "0",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:                 "ceph.with_tpm": "0"
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             },
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "type": "block",
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:             "vg_name": "ceph_vg0"
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:         }
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]:     ]
Oct 10 09:59:00 compute-0 xenodochial_payne[187863]: }
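[editor's note] The JSON block above is the output of the `ceph-volume lvm list --format json` call issued via cephadm at 09:58:55: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags (cluster fsid, osd_fsid, encryption flag, and so on) repeated in parsed form under "tags". A short sketch of consuming that structure follows; the file name `lvm_list.json` is a stand-in for the captured output.

```python
# Hedged sketch: summarize each OSD's backing device from the
# `ceph-volume lvm list --format json` output shown above.
import json

with open("lvm_list.json") as fh:
    osds = json.load(fh)            # {"0": [ {lv entry}, ... ]}

for osd_id, lvs in osds.items():
    for lv in lvs:
        tags = lv.get("tags", {})
        print(
            f"osd.{osd_id}: {lv['lv_path']} "
            f"(devices={','.join(lv['devices'])}, "
            f"osd_fsid={tags.get('ceph.osd_fsid', '?')}, "
            f"encrypted={tags.get('ceph.encrypted', '?')})"
        )
# For the log above this prints:
# osd.0: /dev/ceph_vg0/ceph_lv0 (devices=/dev/loop3,
#        osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab, encrypted=0)
```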
Oct 10 09:59:00 compute-0 systemd[1]: libpod-d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3.scope: Deactivated successfully.
Oct 10 09:59:00 compute-0 podman[187847]: 2025-10-10 09:59:00.035747686 +0000 UTC m=+0.642250348 container died d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_payne, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:59:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-df17aac0c190456a7d9ecafbdc6c65fd073bdd76f75a9a9e091a8a76dcb9e12f-merged.mount: Deactivated successfully.
Oct 10 09:59:00 compute-0 podman[187847]: 2025-10-10 09:59:00.166788033 +0000 UTC m=+0.773290705 container remove d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:59:00 compute-0 systemd[1]: libpod-conmon-d56d3744b68ff5f21cc69e3e46d37d2b5323d14d7218a7bdc147c27916b65bd3.scope: Deactivated successfully.
Oct 10 09:59:00 compute-0 ceph-mon[73551]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:00 compute-0 sudo[187735]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:00 compute-0 sudo[187887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:59:00 compute-0 sudo[187887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:00 compute-0 sudo[187887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:00 compute-0 sudo[187912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:59:00 compute-0 sudo[187912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:00 compute-0 podman[187977]: 2025-10-10 09:59:00.845569552 +0000 UTC m=+0.080630564 container create e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:59:00 compute-0 podman[187977]: 2025-10-10 09:59:00.788378345 +0000 UTC m=+0.023439357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:59:00 compute-0 systemd[1]: Started libpod-conmon-e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430.scope.
Oct 10 09:59:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:00 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:59:01 compute-0 podman[187977]: 2025-10-10 09:59:01.070838556 +0000 UTC m=+0.305899558 container init e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:59:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:01 compute-0 podman[187977]: 2025-10-10 09:59:01.082740744 +0000 UTC m=+0.317801716 container start e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_cohen, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:59:01 compute-0 crazy_cohen[187995]: 167 167
Oct 10 09:59:01 compute-0 systemd[1]: libpod-e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430.scope: Deactivated successfully.
Oct 10 09:59:01 compute-0 conmon[187995]: conmon e1d57385db8e3f623b98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430.scope/container/memory.events
Oct 10 09:59:01 compute-0 podman[187977]: 2025-10-10 09:59:01.127976001 +0000 UTC m=+0.363037013 container attach e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:59:01 compute-0 podman[187977]: 2025-10-10 09:59:01.128475838 +0000 UTC m=+0.363536820 container died e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_cohen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c30ae664a9deb2745f6120b6230e23bef182cba7e8be6ab72b2b034004918d9f-merged.mount: Deactivated successfully.
Oct 10 09:59:01 compute-0 ceph-mon[73551]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:01 compute-0 podman[187977]: 2025-10-10 09:59:01.245490007 +0000 UTC m=+0.480550989 container remove e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:59:01 compute-0 systemd[1]: libpod-conmon-e1d57385db8e3f623b988cb4a334f51ffca139ccbed8b259720f497d9bf74430.scope: Deactivated successfully.
Oct 10 09:59:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:59:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:01 compute-0 podman[188021]: 2025-10-10 09:59:01.443668137 +0000 UTC m=+0.053164167 container create 365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:59:01 compute-0 systemd[1]: Started libpod-conmon-365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781.scope.
Oct 10 09:59:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 09:59:01 compute-0 podman[188021]: 2025-10-10 09:59:01.422746383 +0000 UTC m=+0.032242383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb849d8211073944878850cc2dbeaf6a074ae026d4ba4b37091fc54c3cc2fcb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb849d8211073944878850cc2dbeaf6a074ae026d4ba4b37091fc54c3cc2fcb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb849d8211073944878850cc2dbeaf6a074ae026d4ba4b37091fc54c3cc2fcb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb849d8211073944878850cc2dbeaf6a074ae026d4ba4b37091fc54c3cc2fcb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:59:01 compute-0 podman[188021]: 2025-10-10 09:59:01.548556201 +0000 UTC m=+0.158052301 container init 365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:59:01 compute-0 podman[188021]: 2025-10-10 09:59:01.555334562 +0000 UTC m=+0.164830612 container start 365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_villani, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:59:01 compute-0 podman[188021]: 2025-10-10 09:59:01.559156287 +0000 UTC m=+0.168652387 container attach 365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:59:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:01.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:02.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:02 compute-0 lvm[188115]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:59:02 compute-0 lvm[188115]: VG ceph_vg0 finished
Oct 10 09:59:02 compute-0 lvm[188119]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:59:02 compute-0 lvm[188119]: VG ceph_vg0 finished
Oct 10 09:59:02 compute-0 hungry_villani[188038]: {}
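[editor's note] The `{}` above is the result of the `ceph-volume ... raw list --format json` call issued at 09:59:00: no raw (non-LVM) devices are available, which is consistent with the earlier ceph-volume output ("passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable") since the only data device is the LV already consumed by osd.0. A sketch reproducing the two inventory calls cephadm makes here is below; it assumes an installed `cephadm` wrapper rather than the copied `/var/lib/ceph/<fsid>/cephadm.*` binary with `--image`/`--timeout` seen in the sudo lines, and the fsid is copied from the log.

```python
# Hedged sketch: run the same two ceph-volume inventory calls cephadm
# issues above and report what each sees.
import json
import subprocess

FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"

def ceph_volume(*args):
    # cephadm runs ceph-volume inside the ceph container, as the log shows
    out = subprocess.check_output(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--", *args, "--format", "json"]
    )
    return json.loads(out)

lvm = ceph_volume("lvm", "list")   # {"0": [...]} in the log above
raw = ceph_volume("raw", "list")   # {} in the log above
print(f"LVM-backed OSDs: {len(lvm)}, undeployed raw devices: {len(raw)}")
```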
Oct 10 09:59:02 compute-0 systemd[1]: libpod-365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781.scope: Deactivated successfully.
Oct 10 09:59:02 compute-0 podman[188021]: 2025-10-10 09:59:02.355390579 +0000 UTC m=+0.964886639 container died 365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_villani, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:59:02 compute-0 systemd[1]: libpod-365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781.scope: Consumed 1.221s CPU time.
Oct 10 09:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb849d8211073944878850cc2dbeaf6a074ae026d4ba4b37091fc54c3cc2fcb7-merged.mount: Deactivated successfully.
Oct 10 09:59:02 compute-0 podman[188021]: 2025-10-10 09:59:02.412622058 +0000 UTC m=+1.022118098 container remove 365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_villani, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:59:02 compute-0 systemd[1]: libpod-conmon-365109cc23d0f2e6e9be81d95cb11f621019b8256c3eaf60b2bbadc18917b781.scope: Deactivated successfully.
Oct 10 09:59:02 compute-0 sudo[187912]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 09:59:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:59:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 09:59:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:59:02 compute-0 sudo[188132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:59:02 compute-0 sudo[188132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:02 compute-0 sudo[188132]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:02 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:59:03 compute-0 ceph-mon[73551]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:59:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:03.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:04.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
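
The paired "HEAD / HTTP/1.0" requests from 192.168.122.102 and 192.168.122.100 recur roughly every two seconds and are answered anonymously with 200 and a zero-byte body, which is the classic shape of a load-balancer health probe against radosgw's beast frontend. A minimal sketch of the same request follows; note the listening port never appears in these lines, so the port below is a placeholder:

    # Minimal sketch of the anonymous probe logged above; the beast frontend's
    # listening port is not shown in this log, so 8080 is a placeholder.
    # (http.client speaks HTTP/1.1, whereas the probe uses HTTP/1.0.)
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")   # no credentials, hence "anonymous" in the beast line
    resp = conn.getresponse()
    print(resp.status)          # the log records 200 with a zero-byte body
    conn.close()
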
Oct 10 09:59:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:04 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0047d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:05 compute-0 kernel: SELinux:  Converting 2773 SID table entries...
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:59:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:59:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:05 compute-0 ceph-mon[73551]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:05.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:06.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:06 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 10 09:59:06 compute-0 podman[188167]: 2025-10-10 09:59:06.277422552 +0000 UTC m=+0.116953889 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
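
The config_data blob in the health_status line above is a Python dict repr (single quotes, bare True), not JSON, so json.loads would reject it while ast.literal_eval parses it safely. A short sketch using an abbreviated excerpt of the logged value:

    # Sketch: parse the Python-style config_data repr from the podman line above.
    # The excerpt is abbreviated; the full dict is in the log line itself.
    import ast

    config_data = ("{'depends_on': ['openvswitch.service'], 'net': 'host', "
                   "'privileged': True, 'restart': 'always', 'user': 'root'}")
    cfg = ast.literal_eval(config_data)   # json.loads() would choke on the quoting
    print(cfg["depends_on"], cfg["privileged"])
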
Oct 10 09:59:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:06 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:07.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
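
The dispatcher error above shows both ceph-dashboard webhook receivers (compute-1 and compute-2 on port 8443) timing out, and the follow-up lines at 09:59:17 confirm a plain TCP dial timeout rather than an HTTP-level failure. A quick reachability probe for the same URLs, as a sketch (the empty JSON body and 5 s timeout are illustrative assumptions; Alertmanager's own retry timing comes from its route configuration):

    # Sketch: probe the webhook receivers the dispatcher cannot reach above.
    # URLs are copied from the log; body and timeout are assumptions.
    import urllib.request

    for url in ("http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
                "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"):
        req = urllib.request.Request(url, data=b"[]",
                                     headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, "->", resp.status)
        except OSError as exc:   # covers the dial timeout seen in the log
            print(url, "-> unreachable:", exc)
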
Oct 10 09:59:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:07 compute-0 ceph-mon[73551]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:07] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:59:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:07] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:59:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0047f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:07 compute-0 groupadd[188200]: group added to /etc/group: name=dnsmasq, GID=991
Oct 10 09:59:07 compute-0 groupadd[188200]: group added to /etc/gshadow: name=dnsmasq
Oct 10 09:59:07 compute-0 groupadd[188200]: new group: name=dnsmasq, GID=991
Oct 10 09:59:07 compute-0 useradd[188207]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 10 09:59:07 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:59:07 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Oct 10 09:59:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:07.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:08.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:08 compute-0 groupadd[188222]: group added to /etc/group: name=clevis, GID=990
Oct 10 09:59:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:08 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:08 compute-0 groupadd[188222]: group added to /etc/gshadow: name=clevis
Oct 10 09:59:08 compute-0 groupadd[188222]: new group: name=clevis, GID=990
Oct 10 09:59:09 compute-0 useradd[188229]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 10 09:59:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:09 compute-0 usermod[188239]: add 'clevis' to group 'tss'
Oct 10 09:59:09 compute-0 usermod[188239]: add 'clevis' to shadow group 'tss'
Oct 10 09:59:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:09 compute-0 ceph-mon[73551]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:09.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:10 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:11 compute-0 ceph-mon[73551]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:11 compute-0 polkitd[6931]: Reloading rules
Oct 10 09:59:11 compute-0 polkitd[6931]: Collecting garbage unconditionally...
Oct 10 09:59:11 compute-0 polkitd[6931]: Loading rules from directory /etc/polkit-1/rules.d
Oct 10 09:59:11 compute-0 polkitd[6931]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 10 09:59:11 compute-0 polkitd[6931]: Finished loading, compiling and executing 4 rules
Oct 10 09:59:11 compute-0 polkitd[6931]: Reloading rules
Oct 10 09:59:11 compute-0 polkitd[6931]: Collecting garbage unconditionally...
Oct 10 09:59:11 compute-0 polkitd[6931]: Loading rules from directory /etc/polkit-1/rules.d
Oct 10 09:59:11 compute-0 polkitd[6931]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 10 09:59:11 compute-0 polkitd[6931]: Finished loading, compiling and executing 4 rules
Oct 10 09:59:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:11.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:12.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:12 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:13 compute-0 podman[188419]: 2025-10-10 09:59:13.225257692 +0000 UTC m=+0.068589261 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 10 09:59:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:13 compute-0 sudo[188448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:59:13 compute-0 sudo[188448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:13 compute-0 sudo[188448]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:13 compute-0 groupadd[188474]: group added to /etc/group: name=ceph, GID=167
Oct 10 09:59:13 compute-0 groupadd[188474]: group added to /etc/gshadow: name=ceph
Oct 10 09:59:13 compute-0 groupadd[188474]: new group: name=ceph, GID=167
Oct 10 09:59:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:13.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:14 compute-0 useradd[188480]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 10 09:59:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:14.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:14 compute-0 ceph-mon[73551]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:14 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:15 compute-0 ceph-mon[73551]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:15.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:59:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_09:59:16
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.log', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups', 'images']
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 09:59:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:59:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
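
The mgr's "osd blocklist ls" poll above reaches the mon as a JSON mon_command. The same command can be issued from the python-rados binding, sketched below (assumes the python3-rados package plus a readable ceph.conf and client keyring; the path is the conventional default, not taken from this log):

    # Sketch: send the same mon command the mgr dispatches above via python-rados.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode() or outs)
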
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
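
The pg_autoscaler output above is internally consistent: every non-zero "pg target" equals the pool's space ratio times its bias times 300, which would correspond to the default mon_target_pg_per_osd=100 across 3 OSDs (the OSD count is an inference from the 60 GiB cluster total, not something these lines state). A quick check against three of the logged pools:

    # Sketch: verify pg_target = space_ratio * bias * 300 for the values logged
    # above; the 300 = 100 PGs/OSD * 3 OSDs multiplier is an inference.
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for name, ratio, bias, logged in pools:
        assert abs(ratio * bias * 300 - logged) < 1e-12, name
    print("pg_target == ratio * bias * 300 for all sampled pools")
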
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:59:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 09:59:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 09:59:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:16 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004890 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:17.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:59:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:17.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:59:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:17] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:59:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:17] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:59:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:17 compute-0 ceph-mon[73551]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:17.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:18 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 10 09:59:18 compute-0 sshd[1007]: Received signal 15; terminating.
Oct 10 09:59:18 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 10 09:59:18 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 10 09:59:18 compute-0 systemd[1]: sshd.service: Consumed 2.654s CPU time, read 532.0K from disk, written 0B to disk.
Oct 10 09:59:18 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 10 09:59:18 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 10 09:59:18 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:59:18 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:59:18 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:59:18 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 10 09:59:18 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 10 09:59:18 compute-0 sshd[189180]: Server listening on 0.0.0.0 port 22.
Oct 10 09:59:18 compute-0 sshd[189180]: Server listening on :: port 22.
Oct 10 09:59:18 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 10 09:59:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:18 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0048b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:19 compute-0 ceph-mon[73551]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:19.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:20.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:59:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:59:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:20 compute-0 systemd[1]: Reloading.
Oct 10 09:59:20 compute-0 systemd-rc-local-generator[189440]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:20 compute-0 systemd-sysv-generator[189443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:59:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:20 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0048d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:22.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:22 compute-0 ceph-mon[73551]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:22 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 10 09:59:22 compute-0 PackageKit[191551]: daemon start
Oct 10 09:59:22 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 10 09:59:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:22 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:23 compute-0 sudo[169447]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:23 compute-0 ceph-mon[73551]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:23.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:24.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:24 compute-0 sudo[192966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agitywnrwhuwtkykpotyvbztylvbjxmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090363.5713701-968-193644323369362/AnsiballZ_systemd.py'
Oct 10 09:59:24 compute-0 sudo[192966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:24 compute-0 python3.9[193002]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
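
The AnsiballZ payloads running here are ansible.builtin.systemd invocations; the one above disables, masks, and stops libvirtd in a single call, and the following invocations repeat the pattern for the libvirtd/virtproxyd sockets and the virt*d services. What the call amounts to on the host, as a rough sketch (the module drives systemd directly; this shell-out equivalent is illustrative only):

    # Rough sketch of what the ansible.builtin.systemd call above effects:
    # state=stopped, enabled=False, masked=True for the libvirtd unit.
    import subprocess

    unit = "libvirtd"
    for args in (["stop", unit], ["disable", unit], ["mask", unit]):
        subprocess.run(["systemctl", *args], check=True)
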
Oct 10 09:59:24 compute-0 systemd[1]: Reloading.
Oct 10 09:59:24 compute-0 systemd-rc-local-generator[193490]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:24 compute-0 systemd-sysv-generator[193493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:24 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:25 compute-0 sudo[192966]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:25 compute-0 sudo[194354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjabneqicmjmvhovptkgcxgmfyxomtzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090365.1644533-968-217255312877445/AnsiballZ_systemd.py'
Oct 10 09:59:25 compute-0 sudo[194354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:25 compute-0 ceph-mon[73551]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:25 compute-0 python3.9[194379]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:25 compute-0 systemd[1]: Reloading.
Oct 10 09:59:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:25.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:26 compute-0 systemd-sysv-generator[194814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:26 compute-0 systemd-rc-local-generator[194809]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:26.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:26 compute-0 sudo[194354]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:26 compute-0 sudo[195560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umasnkzwnvnggfbvtmklqpmtonuzfzps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090366.4341235-968-178979161603841/AnsiballZ_systemd.py'
Oct 10 09:59:26 compute-0 sudo[195560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:26 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:27 compute-0 python3.9[195581]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:27.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:59:27 compute-0 systemd[1]: Reloading.
Oct 10 09:59:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c0049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:27 compute-0 systemd-rc-local-generator[195975]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:27 compute-0 systemd-sysv-generator[195981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:27] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:59:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:27] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 09:59:27 compute-0 sudo[195560]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:27 compute-0 ceph-mon[73551]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:27 compute-0 sudo[196748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphawhqjymshttnublkgpparrtjsvnez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090367.5773838-968-54192914624835/AnsiballZ_systemd.py'
Oct 10 09:59:27 compute-0 sudo[196748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:27.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:28 compute-0 python3.9[196772]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:28 compute-0 systemd[1]: Reloading.
Oct 10 09:59:28 compute-0 systemd-sysv-generator[197230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:28 compute-0 systemd-rc-local-generator[197225]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:28 compute-0 sudo[196748]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:28 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:29 compute-0 sudo[198032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnybiavpidyglzqjpkwdhcwcqnbysdjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090368.8617494-1055-93323835424119/AnsiballZ_systemd.py'
Oct 10 09:59:29 compute-0 sudo[198032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:29 compute-0 python3.9[198055]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:29 compute-0 systemd[1]: Reloading.
Oct 10 09:59:29 compute-0 ceph-mon[73551]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:29 compute-0 systemd-sysv-generator[198579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:29 compute-0 systemd-rc-local-generator[198575]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:59:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:59:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.757s CPU time.
Oct 10 09:59:30 compute-0 systemd[1]: run-r71ee9857778841b1b82faff533b3e840.service: Deactivated successfully.
Oct 10 09:59:30 compute-0 sudo[198032]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:30.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:30 compute-0 sudo[198807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwsqqummizwihekezhssdrqjcizeykwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090370.1562202-1055-198370886297283/AnsiballZ_systemd.py'
Oct 10 09:59:30 compute-0 sudo[198807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:30 compute-0 python3.9[198809]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:30 compute-0 systemd[1]: Reloading.
Oct 10 09:59:30 compute-0 systemd-rc-local-generator[198837]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:30 compute-0 systemd-sysv-generator[198841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:30 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:31 compute-0 sudo[198807]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:59:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:31 compute-0 sudo[198998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hefmkfumguejkteljrbdkogadzvtnjmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090371.35177-1055-21791105362505/AnsiballZ_systemd.py'
Oct 10 09:59:31 compute-0 sudo[198998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:31 compute-0 ceph-mon[73551]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:31.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:32 compute-0 python3.9[199000]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:32 compute-0 systemd[1]: Reloading.
Oct 10 09:59:32 compute-0 systemd-sysv-generator[199030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:32 compute-0 systemd-rc-local-generator[199027]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:32 compute-0 sudo[198998]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64002550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:33 compute-0 sudo[199192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrszsgbmvvbgatifpmuzkzjhhozkfwkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090372.6942105-1055-251068067788357/AnsiballZ_systemd.py'
Oct 10 09:59:33 compute-0 sudo[199192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:33 compute-0 python3.9[199194]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:33 compute-0 sudo[199192]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:33 compute-0 sudo[199228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:59:33 compute-0 sudo[199228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:33 compute-0 sudo[199228]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:33 compute-0 ceph-mon[73551]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:33 compute-0 sudo[199372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khihpxpntvflxsvpucaqdgxdmzxgaqlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090373.6753364-1055-76437296863455/AnsiballZ_systemd.py'
Oct 10 09:59:33 compute-0 sudo[199372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:33.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:34 compute-0 python3.9[199374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:34 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:35 compute-0 systemd[1]: Reloading.
Oct 10 09:59:35 compute-0 systemd-rc-local-generator[199408]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:35 compute-0 systemd-sysv-generator[199412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:35 compute-0 sudo[199372]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:35 compute-0 ceph-mon[73551]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:35.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:36 compute-0 sudo[199566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtdlhpstzjavvqgnbxbnfcyxzsjvcka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090375.931256-1163-173272278438135/AnsiballZ_systemd.py'
Oct 10 09:59:36 compute-0 sudo[199566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:36 compute-0 python3.9[199568]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:36 compute-0 podman[199570]: 2025-10-10 09:59:36.766050291 +0000 UTC m=+0.129205549 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:59:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:36 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:37.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:59:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:37] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 09:59:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:37] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 09:59:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:37 compute-0 systemd[1]: Reloading.
Oct 10 09:59:37 compute-0 systemd-sysv-generator[199626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:37 compute-0 systemd-rc-local-generator[199622]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:37 compute-0 ceph-mon[73551]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:37.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:38 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 10 09:59:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:38 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 10 09:59:38 compute-0 sudo[199566]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:38 compute-0 sudo[199787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aprhqbacilajztknmqxvplgugqvyzttu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090378.3966265-1187-213972519800475/AnsiballZ_systemd.py'
Oct 10 09:59:38 compute-0 sudo[199787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:38 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:39 compute-0 python3.9[199789]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e5c004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:39 compute-0 sudo[199787]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:39 compute-0 sudo[199943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbypmmjegohsvvcmiklxktdfhezrdzah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090379.3721514-1187-165486387281502/AnsiballZ_systemd.py'
Oct 10 09:59:39 compute-0 sudo[199943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:39 compute-0 ceph-mon[73551]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:39.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:40 compute-0 python3.9[199945]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:40.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:40 compute-0 sudo[199943]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:40 compute-0 auditd[704]: Audit daemon rotating log files
Oct 10 09:59:40 compute-0 sudo[200100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkemvlnpwnwblcfueyxusbaqhockoruh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090380.5162895-1187-144153498914026/AnsiballZ_systemd.py'
Oct 10 09:59:40 compute-0 sudo[200100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:41 compute-0 python3.9[200102]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:41 compute-0 sudo[200100]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:41 compute-0 sudo[200255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghpngwaybcjccnyqfwbnqvezidthabfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090381.366786-1187-223747501349149/AnsiballZ_systemd.py'
Oct 10 09:59:41 compute-0 sudo[200255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:41 compute-0 ceph-mon[73551]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:59:41.879 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:59:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:59:41.880 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:59:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 09:59:41.880 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:59:41 compute-0 python3.9[200257]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:42 compute-0 sudo[200255]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:42.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:42 compute-0 sudo[200411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdcygpwkqvbsimhxjbvmffkugvhwkizu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090382.2087724-1187-145413213695582/AnsiballZ_systemd.py'
Oct 10 09:59:42 compute-0 sudo[200411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:42 compute-0 python3.9[200413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e700014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:43 compute-0 sudo[200411]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:43 compute-0 sudo[200578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyypxlvbylidfovxjzxklhzooobttxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090383.144651-1187-121215785702264/AnsiballZ_systemd.py'
Oct 10 09:59:43 compute-0 sudo[200578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:43 compute-0 podman[200541]: 2025-10-10 09:59:43.526921167 +0000 UTC m=+0.089103620 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:59:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:43 compute-0 python3.9[200584]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:43 compute-0 ceph-mon[73551]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:43 compute-0 sudo[200578]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.003000099s ======
Oct 10 09:59:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:44.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000099s
Oct 10 09:59:44 compute-0 sudo[200743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aykgfasbqgdlwhvaedvyzmkjxpvmyuur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090384.0588582-1187-115726592543577/AnsiballZ_systemd.py'
Oct 10 09:59:44 compute-0 sudo[200743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:44 compute-0 python3.9[200745]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:44 compute-0 sudo[200743]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:45 compute-0 sudo[200899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uttwqoddnexfoqeggldndxotkkehzjgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090384.9744523-1187-80036252645704/AnsiballZ_systemd.py'
Oct 10 09:59:45 compute-0 sudo[200899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:45 compute-0 python3.9[200901]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:45 compute-0 sudo[200899]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:45 compute-0 ceph-mon[73551]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:46.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:46 compute-0 sudo[201055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqohwiwkhlnsobgjxlmykfzenlhjzyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090385.909819-1187-195444702496912/AnsiballZ_systemd.py'
Oct 10 09:59:46 compute-0 sudo[201055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 09:59:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 09:59:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:46 compute-0 python3.9[201057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:46 compute-0 sudo[201055]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:47.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 09:59:47 compute-0 sudo[201211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgcmkfqmkgiurvcryfrhgijbuienwlen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090386.7810898-1187-275174643705585/AnsiballZ_systemd.py'
Oct 10 09:59:47 compute-0 sudo[201211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:47 compute-0 python3.9[201213]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:59:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:59:47 compute-0 sudo[201211]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:47 compute-0 ceph-mon[73551]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:47 compute-0 sudo[201366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgmlmqxijsldzhzkumktxtpqllxwzqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090387.609868-1187-163366079385968/AnsiballZ_systemd.py'
Oct 10 09:59:47 compute-0 sudo[201366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:59:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:59:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:48.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:48 compute-0 python3.9[201368]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:48 compute-0 sudo[201366]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:48 compute-0 sudo[201523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwdsndbxfhtuoenebbyzlbgvdwoystaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090388.519084-1187-156586868901764/AnsiballZ_systemd.py'
Oct 10 09:59:48 compute-0 sudo[201523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:49 compute-0 python3.9[201525]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:49 compute-0 sudo[201523]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:49 compute-0 sudo[201678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykdazcphxlpylynwlpeyldjiqlcwevsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090389.4387662-1187-184448034885306/AnsiballZ_systemd.py'
Oct 10 09:59:49 compute-0 sudo[201678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:49 compute-0 ceph-mon[73551]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:50.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:50 compute-0 python3.9[201680]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:59:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:59:50 compute-0 sudo[201678]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:50 compute-0 sudo[201834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airxexrbdboccgamdzoxwplpexiytgpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090390.2824926-1187-244325388680022/AnsiballZ_systemd.py'
Oct 10 09:59:50 compute-0 sudo[201834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:50 compute-0 python3.9[201836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:51 compute-0 sudo[201834]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:51 compute-0 ceph-mon[73551]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 09:59:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:52.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 09:59:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:59:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:52.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:59:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:52 compute-0 sudo[201991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqxbrboysaqnirpujrlffujwuldfvezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090392.4803762-1493-35967424518707/AnsiballZ_file.py'
Oct 10 09:59:52 compute-0 sudo[201991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:53 compute-0 python3.9[201994]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e700014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:53 compute-0 sudo[201991]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:53 compute-0 sudo[202144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypgzpifrqxthdhqzfydzhlovoflhklzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090393.18924-1493-30811222226414/AnsiballZ_file.py'
Oct 10 09:59:53 compute-0 sudo[202144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:53 compute-0 python3.9[202146]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:53 compute-0 sudo[202144]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:53 compute-0 sudo[202170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:59:53 compute-0 sudo[202170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:53 compute-0 sudo[202170]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:54 compute-0 ceph-mon[73551]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:54 compute-0 sudo[202322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntywgtfadswbyfrzncndbiquiaktohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090393.8114448-1493-97388447711339/AnsiballZ_file.py'
Oct 10 09:59:54 compute-0 sudo[202322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:54 compute-0 python3.9[202324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:54 compute-0 sudo[202322]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:54 compute-0 sudo[202475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfamxbnwbuhuonlopnmbverkblirresd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090394.526416-1493-78504488108405/AnsiballZ_file.py'
Oct 10 09:59:54 compute-0 sudo[202475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:55 compute-0 python3.9[202477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:55 compute-0 sudo[202475]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e700014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:55 compute-0 sudo[202627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysyxbbztnrkkhkauhcmrkneupktyjzjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090395.1945121-1493-279504554539659/AnsiballZ_file.py'
Oct 10 09:59:55 compute-0 sudo[202627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:55 compute-0 python3.9[202629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:55 compute-0 sudo[202627]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:56.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:56 compute-0 ceph-mon[73551]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:56.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:56 compute-0 sudo[202780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgrtksobasinnjiixkjvmfgqizgxrzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090395.8293602-1493-274526314953174/AnsiballZ_file.py'
Oct 10 09:59:56 compute-0 sudo[202780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:56 compute-0 python3.9[202782]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:56 compute-0 sudo[202780]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T09:59:57.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 09:59:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:57 compute-0 sudo[202933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omolmkumanqanophjutigdbdjqifpmwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090396.7493293-1622-197000923659921/AnsiballZ_stat.py'
Oct 10 09:59:57 compute-0 sudo[202933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:59:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:09:59:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 09:59:57 compute-0 python3.9[202935]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:59:57 compute-0 sudo[202933]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e700014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 09:59:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:58.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 09:59:58 compute-0 sudo[203058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnknhjksbcrsqptkaanlylurbmjwhunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090396.7493293-1622-197000923659921/AnsiballZ_copy.py'
Oct 10 09:59:58 compute-0 sudo[203058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:58 compute-0 ceph-mon[73551]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 09:59:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:58 compute-0 python3.9[203060]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090396.7493293-1622-197000923659921/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:59:58 compute-0 sudo[203058]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:58 compute-0 sudo[203211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnewpjqywqbczaxvrndbrzprvjpadxka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090398.4097145-1622-190897471929926/AnsiballZ_stat.py'
Oct 10 09:59:58 compute-0 sudo[203211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:58 compute-0 python3.9[203213]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:59:58 compute-0 sudo[203211]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e700014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:59 compute-0 sudo[203337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycmooqufvhckhlyxlupaqhczjcdwccbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090398.4097145-1622-190897471929926/AnsiballZ_copy.py'
Oct 10 09:59:59 compute-0 sudo[203337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 09:59:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:59 compute-0 python3.9[203339]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090398.4097145-1622-190897471929926/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:59:59 compute-0 sudo[203337]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 10 10:00:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:00 compute-0 ceph-mon[73551]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:00:00 compute-0 ceph-mon[73551]: overall HEALTH_OK
Oct 10 10:00:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:00.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:00 compute-0 sudo[203490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhhmjvhizcujacuwkaewfegbipnldar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090399.8073783-1622-94154661182969/AnsiballZ_stat.py'
Oct 10 10:00:00 compute-0 sudo[203490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:00 compute-0 python3.9[203492]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:00 compute-0 sudo[203490]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:00 compute-0 sudo[203615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzossyqsdcjqzubqgvptyjvgolshtoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090399.8073783-1622-94154661182969/AnsiballZ_copy.py'
Oct 10 10:00:00 compute-0 sudo[203615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:00 compute-0 python3.9[203617]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090399.8073783-1622-94154661182969/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:01 compute-0 sudo[203615]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:01 compute-0 ceph-mon[73551]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:00:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:01 compute-0 sudo[203768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtbutreqxpjpkqwhfbmebovjmfbqfsku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090401.1604261-1622-110599974309041/AnsiballZ_stat.py'
Oct 10 10:00:01 compute-0 sudo[203768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:01 compute-0 python3.9[203770]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:01 compute-0 sudo[203768]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:02.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:02 compute-0 sudo[203894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipiwejzalyttjsclhmfgoblxnqbuebju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090401.1604261-1622-110599974309041/AnsiballZ_copy.py'
Oct 10 10:00:02 compute-0 sudo[203894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:02.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:02 compute-0 python3.9[203896]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090401.1604261-1622-110599974309041/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:02 compute-0 sudo[203894]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:02 compute-0 sudo[204047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jusyixmxzqtzdstzfhdgfkrggtfvfcma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090402.4883382-1622-176170558986636/AnsiballZ_stat.py'
Oct 10 10:00:02 compute-0 sudo[204047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:02 compute-0 sudo[204049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:00:02 compute-0 sudo[204049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:02 compute-0 sudo[204049]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:02 compute-0 sudo[204075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 10:00:02 compute-0 sudo[204075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:03 compute-0 python3.9[204052]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:03 compute-0 ceph-mon[73551]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:03 compute-0 sudo[204047]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:03 compute-0 sudo[204288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dekpdillchiaoqsevpgipwqtegbtjlqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090402.4883382-1622-176170558986636/AnsiballZ_copy.py'
Oct 10 10:00:03 compute-0 sudo[204288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:03 compute-0 podman[204297]: 2025-10-10 10:00:03.643027009 +0000 UTC m=+0.091502085 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 10:00:03 compute-0 python3.9[204296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090402.4883382-1622-176170558986636/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:03 compute-0 podman[204297]: 2025-10-10 10:00:03.757920282 +0000 UTC m=+0.206395378 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:00:03 compute-0 sudo[204288]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:04.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:04.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:04 compute-0 sudo[204563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzmvnlvviwwqbcaqdwmequtztnduzyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090403.941088-1622-751601469205/AnsiballZ_stat.py'
Oct 10 10:00:04 compute-0 sudo[204563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:04 compute-0 podman[204559]: 2025-10-10 10:00:04.26864537 +0000 UTC m=+0.058145189 container exec 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:00:04 compute-0 podman[204559]: 2025-10-10 10:00:04.278017427 +0000 UTC m=+0.067517266 container exec_died 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:00:04 compute-0 python3.9[204575]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:04 compute-0 sudo[204563]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:04 compute-0 podman[204682]: 2025-10-10 10:00:04.633878079 +0000 UTC m=+0.056617942 container exec c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:00:04 compute-0 podman[204682]: 2025-10-10 10:00:04.651789774 +0000 UTC m=+0.074529587 container exec_died c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:00:04 compute-0 sudo[204848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnnbtdwebfzdblkkdwpkjawdnlpjampk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090403.941088-1622-751601469205/AnsiballZ_copy.py'
Oct 10 10:00:04 compute-0 sudo[204848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:04 compute-0 podman[204837]: 2025-10-10 10:00:04.878151081 +0000 UTC m=+0.052046246 container exec 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 10:00:04 compute-0 podman[204837]: 2025-10-10 10:00:04.903743281 +0000 UTC m=+0.077638426 container exec_died 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 10:00:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:05 compute-0 python3.9[204858]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090403.941088-1622-751601469205/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:05 compute-0 sudo[204848]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:05 compute-0 podman[204910]: 2025-10-10 10:00:05.129878041 +0000 UTC m=+0.060749252 container exec 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.buildah.version=1.28.2, name=keepalived, vcs-type=git, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Oct 10 10:00:05 compute-0 podman[204910]: 2025-10-10 10:00:05.208771555 +0000 UTC m=+0.139642746 container exec_died 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived)
Oct 10 10:00:05 compute-0 podman[205059]: 2025-10-10 10:00:05.431736356 +0000 UTC m=+0.057512810 container exec e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:00:05 compute-0 podman[205059]: 2025-10-10 10:00:05.475493779 +0000 UTC m=+0.101270223 container exec_died e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:00:05 compute-0 sudo[205169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfxqfeiyszwazdizqwpbyoythikauff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090405.2470043-1622-81698189528143/AnsiballZ_stat.py'
Oct 10 10:00:05 compute-0 sudo[205169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:05 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:05 compute-0 ceph-mon[73551]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:05 compute-0 podman[205200]: 2025-10-10 10:00:05.725020379 +0000 UTC m=+0.067762834 container exec 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 10:00:05 compute-0 python3.9[205173]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:05 compute-0 sudo[205169]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:05 compute-0 podman[205200]: 2025-10-10 10:00:05.931952671 +0000 UTC m=+0.274695106 container exec_died 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 10:00:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:06.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:06 compute-0 sudo[205402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcyxgstuiyckebzirbhntewptwovwhex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090405.2470043-1622-81698189528143/AnsiballZ_copy.py'
Oct 10 10:00:06 compute-0 sudo[205402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:06 compute-0 podman[205435]: 2025-10-10 10:00:06.356708421 +0000 UTC m=+0.063412695 container exec fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:00:06 compute-0 python3.9[205406]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090405.2470043-1622-81698189528143/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:06 compute-0 podman[205435]: 2025-10-10 10:00:06.41548751 +0000 UTC m=+0.122191784 container exec_died fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:00:06 compute-0 sudo[205402]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:06 compute-0 sudo[204075]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:00:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:00:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:06 compute-0 sudo[205502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:00:06 compute-0 sudo[205502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:06 compute-0 sudo[205502]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:06 compute-0 sudo[205554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:00:06 compute-0 sudo[205554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:06 compute-0 sudo[205699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxwoueamvdxmfxmilhuzbxidnnvrwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090406.5681338-1622-122456056595434/AnsiballZ_stat.py'
Oct 10 10:00:06 compute-0 sudo[205699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:06 compute-0 podman[205652]: 2025-10-10 10:00:06.959348936 +0000 UTC m=+0.117941840 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Oct 10 10:00:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:07.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:00:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:07.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:00:07 compute-0 python3.9[205709]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:07 compute-0 sudo[205699]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:07 compute-0 sudo[205554]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:00:07 compute-0 sudo[205774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:00:07 compute-0 sudo[205774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:07 compute-0 sudo[205774]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:07 compute-0 sudo[205821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:00:07 compute-0 sudo[205821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:07] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:00:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:07] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:00:07 compute-0 sudo[205909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uowosakjuozcmxhzybyurgolmmzcgypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090406.5681338-1622-122456056595434/AnsiballZ_copy.py'
Oct 10 10:00:07 compute-0 sudo[205909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-0 ceph-mon[73551]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:00:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:07 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:07 compute-0 python3.9[205911]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090406.5681338-1622-122456056595434/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:07 compute-0 sudo[205909]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:07 compute-0 podman[205975]: 2025-10-10 10:00:07.862607446 +0000 UTC m=+0.053068529 container create 9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 10:00:07 compute-0 systemd[1]: Started libpod-conmon-9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9.scope.
Oct 10 10:00:07 compute-0 podman[205975]: 2025-10-10 10:00:07.838613077 +0000 UTC m=+0.029074240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:00:07 compute-0 podman[205975]: 2025-10-10 10:00:07.968139253 +0000 UTC m=+0.158600356 container init 9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:00:07 compute-0 podman[205975]: 2025-10-10 10:00:07.975135244 +0000 UTC m=+0.165596317 container start 9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 10:00:07 compute-0 podman[205975]: 2025-10-10 10:00:07.978669396 +0000 UTC m=+0.169130479 container attach 9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Oct 10 10:00:07 compute-0 exciting_ardinghelli[205992]: 167 167
Oct 10 10:00:07 compute-0 systemd[1]: libpod-9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9.scope: Deactivated successfully.
Oct 10 10:00:07 compute-0 podman[205975]: 2025-10-10 10:00:07.983347503 +0000 UTC m=+0.173808596 container died 9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc869ebf93b416bb29b8bac35e157bf912d03d086a4a0f71dd0c43575ba6f3f-merged.mount: Deactivated successfully.
Oct 10 10:00:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:00:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:08.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:00:08 compute-0 podman[205975]: 2025-10-10 10:00:08.035993068 +0000 UTC m=+0.226454151 container remove 9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct 10 10:00:08 compute-0 systemd[1]: libpod-conmon-9216abe2c464bfdb814773ff7350799dd1e90ebae09cec26e44f0c7ad70cb1c9.scope: Deactivated successfully.
Oct 10 10:00:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:08.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.266224068 +0000 UTC m=+0.064540382 container create 4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:00:08 compute-0 systemd[1]: Started libpod-conmon-4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb.scope.
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.246013608 +0000 UTC m=+0.044329872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:08 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e020bf67bef44e79c2332a68a38cd9b840579c849966ec284089095c3c824/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e020bf67bef44e79c2332a68a38cd9b840579c849966ec284089095c3c824/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e020bf67bef44e79c2332a68a38cd9b840579c849966ec284089095c3c824/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e020bf67bef44e79c2332a68a38cd9b840579c849966ec284089095c3c824/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58e020bf67bef44e79c2332a68a38cd9b840579c849966ec284089095c3c824/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.363502433 +0000 UTC m=+0.161818697 container init 4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.374462451 +0000 UTC m=+0.172778705 container start 4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.378913561 +0000 UTC m=+0.177229835 container attach 4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 10:00:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:00:08 compute-0 sudo[206162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndembcszdmztwuhfisyyepbgpkujujkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090408.2287567-1961-134246673337854/AnsiballZ_command.py'
Oct 10 10:00:08 compute-0 sudo[206162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:08 compute-0 wizardly_cerf[206084]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:00:08 compute-0 wizardly_cerf[206084]: --> All data devices are unavailable
Oct 10 10:00:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:08 compute-0 systemd[1]: libpod-4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb.scope: Deactivated successfully.
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.769740058 +0000 UTC m=+0.568056302 container died 4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 10:00:08 compute-0 python3.9[206164]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 10 10:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a58e020bf67bef44e79c2332a68a38cd9b840579c849966ec284089095c3c824-merged.mount: Deactivated successfully.
Oct 10 10:00:08 compute-0 podman[206016]: 2025-10-10 10:00:08.82292999 +0000 UTC m=+0.621246244 container remove 4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:00:08 compute-0 systemd[1]: libpod-conmon-4effdd19dbb6956f987222e3e8f842760199c389d4b0512f0844dadd08cadddb.scope: Deactivated successfully.
Oct 10 10:00:08 compute-0 sudo[206162]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:08 compute-0 sudo[205821]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:08 compute-0 sudo[206197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:00:08 compute-0 sudo[206197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:08 compute-0 sudo[206197]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:09 compute-0 sudo[206240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:00:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:09 compute-0 sudo[206240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.465746225 +0000 UTC m=+0.041768411 container create 865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:00:09 compute-0 systemd[1]: Started libpod-conmon-865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd.scope.
Oct 10 10:00:09 compute-0 sudo[206444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkhdqkmytnbgvxjflhwhljtaraeckwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090409.1644378-1988-119020953881100/AnsiballZ_file.py'
Oct 10 10:00:09 compute-0 sudo[206444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.447458277 +0000 UTC m=+0.023480513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:09 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:00:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:09 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.564716774 +0000 UTC m=+0.140739050 container init 865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_tharp, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.574399911 +0000 UTC m=+0.150422097 container start 865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_tharp, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.578037836 +0000 UTC m=+0.154060082 container attach 865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_tharp, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:00:09 compute-0 nostalgic_tharp[206449]: 167 167
Oct 10 10:00:09 compute-0 systemd[1]: libpod-865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd.scope: Deactivated successfully.
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.582187177 +0000 UTC m=+0.158209383 container died 865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c28bb7b4ee0c0a067a15c50f73a403802485583ef94f939cf274aa8867d6cdf6-merged.mount: Deactivated successfully.
Oct 10 10:00:09 compute-0 ceph-mon[73551]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:00:09 compute-0 podman[206404]: 2025-10-10 10:00:09.635695829 +0000 UTC m=+0.211718015 container remove 865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 10:00:09 compute-0 systemd[1]: libpod-conmon-865966b94d87c12935e72de101e425cab2c25650b365343296c8172d64e195bd.scope: Deactivated successfully.
Oct 10 10:00:09 compute-0 python3.9[206450]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:09 compute-0 sudo[206444]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:09 compute-0 podman[206473]: 2025-10-10 10:00:09.843896432 +0000 UTC m=+0.061204226 container create 01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cori, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 10:00:09 compute-0 systemd[1]: Started libpod-conmon-01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131.scope.
Oct 10 10:00:09 compute-0 podman[206473]: 2025-10-10 10:00:09.821593627 +0000 UTC m=+0.038901471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:09 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42238371e47523b63d259ddef3e4dcbb6dd3c2647a31d43281ba008b9670a3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42238371e47523b63d259ddef3e4dcbb6dd3c2647a31d43281ba008b9670a3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42238371e47523b63d259ddef3e4dcbb6dd3c2647a31d43281ba008b9670a3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42238371e47523b63d259ddef3e4dcbb6dd3c2647a31d43281ba008b9670a3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:09 compute-0 podman[206473]: 2025-10-10 10:00:09.959867728 +0000 UTC m=+0.177175562 container init 01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:00:09 compute-0 podman[206473]: 2025-10-10 10:00:09.970480344 +0000 UTC m=+0.187788158 container start 01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cori, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:00:09 compute-0 podman[206473]: 2025-10-10 10:00:09.975184313 +0000 UTC m=+0.192492107 container attach 01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 10:00:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:10.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:10 compute-0 sudo[206646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hapxvjvbpopeaplxearpdvaaegsdlsuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090409.92714-1988-3137023915559/AnsiballZ_file.py'
Oct 10 10:00:10 compute-0 sudo[206646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:10 compute-0 infallible_cori[206525]: {
Oct 10 10:00:10 compute-0 infallible_cori[206525]:     "0": [
Oct 10 10:00:10 compute-0 infallible_cori[206525]:         {
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "devices": [
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "/dev/loop3"
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             ],
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "lv_name": "ceph_lv0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "lv_size": "21470642176",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "name": "ceph_lv0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "tags": {
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.cluster_name": "ceph",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.crush_device_class": "",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.encrypted": "0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.osd_id": "0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.type": "block",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.vdo": "0",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:                 "ceph.with_tpm": "0"
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             },
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "type": "block",
Oct 10 10:00:10 compute-0 infallible_cori[206525]:             "vg_name": "ceph_vg0"
Oct 10 10:00:10 compute-0 infallible_cori[206525]:         }
Oct 10 10:00:10 compute-0 infallible_cori[206525]:     ]
Oct 10 10:00:10 compute-0 infallible_cori[206525]: }
Oct 10 10:00:10 compute-0 systemd[1]: libpod-01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131.scope: Deactivated successfully.
Oct 10 10:00:10 compute-0 podman[206473]: 2025-10-10 10:00:10.333603026 +0000 UTC m=+0.550910820 container died 01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 10:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-42238371e47523b63d259ddef3e4dcbb6dd3c2647a31d43281ba008b9670a3da-merged.mount: Deactivated successfully.
Oct 10 10:00:10 compute-0 podman[206473]: 2025-10-10 10:00:10.378673331 +0000 UTC m=+0.595981125 container remove 01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:00:10 compute-0 systemd[1]: libpod-conmon-01089f63692dca2a0c6bb887b8713cf479c4bfe90892d4823fcdada532b09131.scope: Deactivated successfully.
Oct 10 10:00:10 compute-0 sudo[206240]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:10 compute-0 sudo[206661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:00:10 compute-0 python3.9[206649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:10 compute-0 sudo[206661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:10 compute-0 sudo[206661]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:10 compute-0 sudo[206646]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:10 compute-0 sudo[206686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:00:10 compute-0 sudo[206686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:11 compute-0 sudo[206902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbtltupvhvhjnoheqmsdvdgmjgwucepq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090410.6800013-1988-92298184171480/AnsiballZ_file.py'
Oct 10 10:00:11 compute-0 sudo[206902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.032601497 +0000 UTC m=+0.056161127 container create d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ellis, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:00:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:11 compute-0 systemd[1]: Started libpod-conmon-d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287.scope.
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.004444066 +0000 UTC m=+0.028003776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:11 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.130505772 +0000 UTC m=+0.154065422 container init d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:00:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.140503879 +0000 UTC m=+0.164063559 container start d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ellis, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.145010291 +0000 UTC m=+0.168569951 container attach d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:00:11 compute-0 dreamy_ellis[206919]: 167 167
Oct 10 10:00:11 compute-0 systemd[1]: libpod-d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287.scope: Deactivated successfully.
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.148018746 +0000 UTC m=+0.171578396 container died d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ellis, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-37a0fae7ddfc74bcffaa62f96de3f31c67191e3a693ac0c599721643b4876d4b-merged.mount: Deactivated successfully.
Oct 10 10:00:11 compute-0 podman[206897]: 2025-10-10 10:00:11.196624883 +0000 UTC m=+0.220184533 container remove d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:00:11 compute-0 systemd[1]: libpod-conmon-d46250d9e7b2544e11648e6bd73b9dcb29042d621d8b937f9915e1d408607287.scope: Deactivated successfully.
Oct 10 10:00:11 compute-0 python3.9[206913]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:11 compute-0 sudo[206902]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:11 compute-0 podman[206949]: 2025-10-10 10:00:11.381780627 +0000 UTC m=+0.046427089 container create b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 10:00:11 compute-0 systemd[1]: Started libpod-conmon-b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18.scope.
Oct 10 10:00:11 compute-0 podman[206949]: 2025-10-10 10:00:11.36384159 +0000 UTC m=+0.028488042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:11 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae46c453094519e3213ffe3507db00a05b399af84a4ec8ddc911708c48a0cbda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae46c453094519e3213ffe3507db00a05b399af84a4ec8ddc911708c48a0cbda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae46c453094519e3213ffe3507db00a05b399af84a4ec8ddc911708c48a0cbda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae46c453094519e3213ffe3507db00a05b399af84a4ec8ddc911708c48a0cbda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:11 compute-0 podman[206949]: 2025-10-10 10:00:11.484915888 +0000 UTC m=+0.149562430 container init b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:00:11 compute-0 podman[206949]: 2025-10-10 10:00:11.497088183 +0000 UTC m=+0.161734635 container start b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:00:11 compute-0 podman[206949]: 2025-10-10 10:00:11.501650018 +0000 UTC m=+0.166296470 container attach b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_brattain, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:00:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:11 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:11 compute-0 ceph-mon[73551]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:11 compute-0 sudo[207113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygcpklrvuhmlfnvxssnpmhnezbzfwhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090411.4288595-1988-158076507603316/AnsiballZ_file.py'
Oct 10 10:00:11 compute-0 sudo[207113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:11 compute-0 python3.9[207121]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:11 compute-0 sudo[207113]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:12.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:12 compute-0 lvm[207284]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:00:12 compute-0 lvm[207284]: VG ceph_vg0 finished
Oct 10 10:00:12 compute-0 boring_brattain[207007]: {}
Oct 10 10:00:12 compute-0 systemd[1]: libpod-b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18.scope: Deactivated successfully.
Oct 10 10:00:12 compute-0 systemd[1]: libpod-b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18.scope: Consumed 1.366s CPU time.
Oct 10 10:00:12 compute-0 podman[206949]: 2025-10-10 10:00:12.335140231 +0000 UTC m=+0.999786693 container died b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 10:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae46c453094519e3213ffe3507db00a05b399af84a4ec8ddc911708c48a0cbda-merged.mount: Deactivated successfully.
Oct 10 10:00:12 compute-0 podman[206949]: 2025-10-10 10:00:12.391232795 +0000 UTC m=+1.055879247 container remove b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Oct 10 10:00:12 compute-0 systemd[1]: libpod-conmon-b07edb582a3420dc5277414f4de49b2b52231e0dda13a32c3883533b6bc08f18.scope: Deactivated successfully.
Oct 10 10:00:12 compute-0 sudo[207351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tupmxluqywdxwimzbybxvizfzzfzgpzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090412.1173353-1988-48328244354776/AnsiballZ_file.py'
Oct 10 10:00:12 compute-0 sudo[207351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:12 compute-0 sudo[206686]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:00:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:00:12 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:12 compute-0 sudo[207354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:00:12 compute-0 sudo[207354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:12 compute-0 sudo[207354]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:12 compute-0 python3.9[207353]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:12 compute-0 sudo[207351]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:13 compute-0 sudo[207529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dziumslmvugrhcvgqbbrxlnbpypnnjxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090412.858991-1988-127650063575893/AnsiballZ_file.py'
Oct 10 10:00:13 compute-0 sudo[207529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:13 compute-0 python3.9[207531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:13 compute-0 sudo[207529]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:13 compute-0 ceph-mon[73551]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:13 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:13 compute-0 sudo[207709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rahitpjkkxrcrfbxajagazswtxuestiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090413.5464876-1988-241080695891242/AnsiballZ_file.py'
Oct 10 10:00:13 compute-0 sudo[207709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:13 compute-0 sudo[207671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:00:13 compute-0 sudo[207671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:13 compute-0 sudo[207671]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:13 compute-0 podman[207655]: 2025-10-10 10:00:13.907665932 +0000 UTC m=+0.108518192 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:00:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:14.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:14 compute-0 python3.9[207719]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:14.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:14 compute-0 sudo[207709]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:14 compute-0 sudo[207878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csrlzykdawrsgrkkgcgucmjmsdqguydm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090414.2804255-1988-230067445703052/AnsiballZ_file.py'
Oct 10 10:00:14 compute-0 sudo[207878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:14 compute-0 python3.9[207880]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:14 compute-0 sudo[207878]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:15 compute-0 sudo[208031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsrxdjmsvsmdvyjfoyjepxbzophsises ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090414.95116-1988-64366963246834/AnsiballZ_file.py'
Oct 10 10:00:15 compute-0 sudo[208031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:15 compute-0 python3.9[208033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:15 compute-0 sudo[208031]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:15 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:15 compute-0 ceph-mon[73551]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:16.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:16 compute-0 sudo[208184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvimzejnygkxhrzsqhjacwrhxudmsypb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090415.6378-1988-202503453026195/AnsiballZ_file.py'
Oct 10 10:00:16 compute-0 sudo[208184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:00:16
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', '.nfs', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root']
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:00:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:00:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:00:16 compute-0 python3.9[208186]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:16 compute-0 sudo[208184]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:00:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:16 compute-0 sudo[208337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cizwxmzkoqobifxcqrpkpyzjdksezewy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090416.6105762-1988-76047441262427/AnsiballZ_file.py'
Oct 10 10:00:16 compute-0 sudo[208337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:17.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:00:17 compute-0 python3.9[208339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:17 compute-0 sudo[208337]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:00:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:00:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:17 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:17 compute-0 sudo[208489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzhobodnxnucpjjjgoxdxmbxcgmufpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090417.2992113-1988-264275409635513/AnsiballZ_file.py'
Oct 10 10:00:17 compute-0 sudo[208489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:17 compute-0 ceph-mon[73551]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:17 compute-0 python3.9[208491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:17 compute-0 sudo[208489]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:18.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:18 compute-0 sudo[208642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwydjeuvaxfkopzkweuayardrqqiefln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090417.9576592-1988-60618667653236/AnsiballZ_file.py'
Oct 10 10:00:18 compute-0 sudo[208642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:18 compute-0 python3.9[208644]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:18 compute-0 sudo[208642]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:00:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:18 compute-0 sudo[208795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igomilgzythubzkgfvffkwgmzjdkwilb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090418.6765573-1988-248407452073795/AnsiballZ_file.py'
Oct 10 10:00:18 compute-0 sudo[208795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:19 compute-0 python3.9[208797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:19 compute-0 sudo[208795]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:19 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:19 compute-0 ceph-mon[73551]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:00:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:20 compute-0 sudo[208948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzskjlbngbnxugznhiybdjbitjkoycw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090419.7772512-2285-209724930270303/AnsiballZ_stat.py'
Oct 10 10:00:20 compute-0 sudo[208948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:20 compute-0 python3.9[208950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:20 compute-0 sudo[208948]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100020 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:00:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:20 compute-0 sudo[209071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypnoauqinawxgazwnipfieajnmlejoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090419.7772512-2285-209724930270303/AnsiballZ_copy.py'
Oct 10 10:00:20 compute-0 sudo[209071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:20 compute-0 python3.9[209073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090419.7772512-2285-209724930270303/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:20 compute-0 sudo[209071]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58002dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:21 compute-0 sudo[209224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxmggyeaiztmnakzlzcazzdqkmvbqeob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090421.0762544-2285-120711137373949/AnsiballZ_stat.py'
Oct 10 10:00:21 compute-0 sudo[209224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:21 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:21 compute-0 python3.9[209226]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:21 compute-0 sudo[209224]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:21 compute-0 ceph-mon[73551]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:22 compute-0 sudo[209347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byiujnbjlqhkdngmryedwbvlxycppjkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090421.0762544-2285-120711137373949/AnsiballZ_copy.py'
Oct 10 10:00:22 compute-0 sudo[209347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:22.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:22 compute-0 python3.9[209349]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090421.0762544-2285-120711137373949/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:22 compute-0 sudo[209347]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:22 compute-0 sudo[209500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsnhajtvdidlsngyqxltjjujluksccm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090422.402396-2285-115576768406023/AnsiballZ_stat.py'
Oct 10 10:00:22 compute-0 sudo[209500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:22 compute-0 python3.9[209502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:22 compute-0 sudo[209500]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58002dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:23 compute-0 sudo[209624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqzefxwgqnphxyatzpltvsjaskixjlon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090422.402396-2285-115576768406023/AnsiballZ_copy.py'
Oct 10 10:00:23 compute-0 sudo[209624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:23 compute-0 python3.9[209626]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090422.402396-2285-115576768406023/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:23 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:23 compute-0 sudo[209624]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:23 compute-0 ceph-mon[73551]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:24.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:24 compute-0 sudo[209777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhuylulnyzecwltnbpadwwfanyohgqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090423.7472935-2285-266757665475558/AnsiballZ_stat.py'
Oct 10 10:00:24 compute-0 sudo[209777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:24.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:24 compute-0 python3.9[209779]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:24 compute-0 sudo[209777]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:24 compute-0 sudo[209900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jceoadopegoblydynsqlqqglqushokzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090423.7472935-2285-266757665475558/AnsiballZ_copy.py'
Oct 10 10:00:24 compute-0 sudo[209900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:24 compute-0 python3.9[209902]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090423.7472935-2285-266757665475558/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:24 compute-0 sudo[209900]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:25 compute-0 sudo[210053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btcupbdqlanmajlxhusktvhwgpsehfkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090425.0367389-2285-233474731184013/AnsiballZ_stat.py'
Oct 10 10:00:25 compute-0 sudo[210053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:25 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:25 compute-0 python3.9[210055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:25 compute-0 sudo[210053]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:25 compute-0 ceph-mon[73551]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:25 compute-0 sudo[210176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvovedaewykrhlwcgkqsedrrpnyyyyji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090425.0367389-2285-233474731184013/AnsiballZ_copy.py'
Oct 10 10:00:25 compute-0 sudo[210176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:26.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:26.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:26 compute-0 python3.9[210178]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090425.0367389-2285-233474731184013/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:26 compute-0 sudo[210176]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:26 compute-0 sudo[210329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftkxsuvcwghpnodokjckxulixobqbet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090426.3436835-2285-201575661471418/AnsiballZ_stat.py'
Oct 10 10:00:26 compute-0 sudo[210329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:26 compute-0 python3.9[210331]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:26 compute-0 sudo[210329]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:27.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:00:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:27 compute-0 sudo[210453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihvsqmdpetrcyfumyamorhjqwrrxlwnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090426.3436835-2285-201575661471418/AnsiballZ_copy.py'
Oct 10 10:00:27 compute-0 sudo[210453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:00:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:00:27 compute-0 python3.9[210455]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090426.3436835-2285-201575661471418/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:27 compute-0 sudo[210453]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:27 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:27 compute-0 ceph-mon[73551]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:27 compute-0 sudo[210605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzsqfnbhlzrauqyjjurrjdqwrgpgtzrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090427.6349897-2285-234474472763290/AnsiballZ_stat.py'
Oct 10 10:00:27 compute-0 sudo[210605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:28.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:28.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:28 compute-0 python3.9[210607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:28 compute-0 sudo[210605]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:28 compute-0 sudo[210729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llaeozuvxiaapaoowmtthdwposaqtiur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090427.6349897-2285-234474472763290/AnsiballZ_copy.py'
Oct 10 10:00:28 compute-0 sudo[210729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:00:28 compute-0 python3.9[210731]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090427.6349897-2285-234474472763290/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:28 compute-0 sudo[210729]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:29 compute-0 sudo[210882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cncvjiyyspofeucanctknhciykkgvzuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090428.9030275-2285-241918778849367/AnsiballZ_stat.py'
Oct 10 10:00:29 compute-0 sudo[210882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:29 compute-0 python3.9[210884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:29 compute-0 sudo[210882]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:29 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:00:29 compute-0 ceph-mon[73551]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:00:29 compute-0 sudo[211005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmtgmfvwdrvyqxqztfxlxumxvhttmold ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090428.9030275-2285-241918778849367/AnsiballZ_copy.py'
Oct 10 10:00:29 compute-0 sudo[211005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:30.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:30 compute-0 python3.9[211007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090428.9030275-2285-241918778849367/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:30 compute-0 sudo[211005]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:30.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:30 compute-0 sudo[211158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leafvobnzvscsxsqqeuyivvvuoxwobky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090430.2710674-2285-68313381646844/AnsiballZ_stat.py'
Oct 10 10:00:30 compute-0 sudo[211158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:00:30 compute-0 python3.9[211160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:30 compute-0 sudo[211158]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:31 compute-0 sudo[211282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzxwicortrpbfvfvhkkvycleppfcigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090430.2710674-2285-68313381646844/AnsiballZ_copy.py'
Oct 10 10:00:31 compute-0 sudo[211282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:00:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:31 compute-0 python3.9[211284]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090430.2710674-2285-68313381646844/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:31 compute-0 sudo[211282]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:31 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:31 compute-0 ceph-mon[73551]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:00:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:31 compute-0 sudo[211434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzijkxpkcpwjnckiqxgyxoxkrzkpiayk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090431.503807-2285-204206842393729/AnsiballZ_stat.py'
Oct 10 10:00:31 compute-0 sudo[211434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:31 compute-0 python3.9[211436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:32 compute-0 sudo[211434]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:32.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100032 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:00:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:32.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:32 compute-0 sudo[211558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaykcscxrzfwquamjjfukgzvjpaxukza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090431.503807-2285-204206842393729/AnsiballZ_copy.py'
Oct 10 10:00:32 compute-0 sudo[211558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:32 compute-0 python3.9[211560]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090431.503807-2285-204206842393729/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:00:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:32 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:00:32 compute-0 sudo[211558]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:32 compute-0 sudo[211711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xauwpjvifpmrqqzdllfaayltbwhphedq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090432.7268407-2285-186496133328284/AnsiballZ_stat.py'
Oct 10 10:00:33 compute-0 sudo[211711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:33 compute-0 python3.9[211713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:33 compute-0 sudo[211711]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:33 compute-0 sudo[211834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvkrbljjmwmywukmjulcuulibimeshl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090432.7268407-2285-186496133328284/AnsiballZ_copy.py'
Oct 10 10:00:33 compute-0 sudo[211834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:33 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:33 compute-0 python3.9[211836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090432.7268407-2285-186496133328284/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:33 compute-0 ceph-mon[73551]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:33 compute-0 sudo[211834]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:33 compute-0 sudo[211864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:00:33 compute-0 sudo[211864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:33 compute-0 sudo[211864]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:34.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:34 compute-0 sudo[212012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhrhidzqbclxsrbqbvugjevvazlenhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090433.9404974-2285-33378976658918/AnsiballZ_stat.py'
Oct 10 10:00:34 compute-0 sudo[212012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:34 compute-0 python3.9[212014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:34 compute-0 sudo[212012]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:34 compute-0 sudo[212136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slyygbnypqtcxzdezszpscpqtcnuwzuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090433.9404974-2285-33378976658918/AnsiballZ_copy.py'
Oct 10 10:00:34 compute-0 sudo[212136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:35 compute-0 python3.9[212138]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090433.9404974-2285-33378976658918/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:35 compute-0 sudo[212136]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:35 compute-0 sudo[212288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuthqxvgamrlzizblektssgofdhlbtpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090435.2162516-2285-19372968764867/AnsiballZ_stat.py'
Oct 10 10:00:35 compute-0 sudo[212288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:35 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:35 compute-0 python3.9[212290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:35 compute-0 sudo[212288]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:35 compute-0 ceph-mon[73551]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:36.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:36 compute-0 sudo[212412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sreynigbagsqiuwntrqcmjlkkwptdjqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090435.2162516-2285-19372968764867/AnsiballZ_copy.py'
Oct 10 10:00:36 compute-0 sudo[212412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:36 compute-0 python3.9[212414]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090435.2162516-2285-19372968764867/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:36 compute-0 sudo[212412]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:36 compute-0 sudo[212565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vakozlxcbxedhsgmjtnbirfjkesvxjrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090436.5413375-2285-100135892657339/AnsiballZ_stat.py'
Oct 10 10:00:36 compute-0 sudo[212565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:37.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:37.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:37.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:37 compute-0 python3.9[212567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:37 compute-0 sudo[212565]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e6c004eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:37 compute-0 podman[212568]: 2025-10-10 10:00:37.28366492 +0000 UTC m=+0.116822465 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:00:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:00:37 compute-0 sudo[212714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntzhmwkvhbvawhcekbnwwliupiwvxycv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090436.5413375-2285-100135892657339/AnsiballZ_copy.py'
Oct 10 10:00:37 compute-0 sudo[212714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:37 compute-0 python3.9[212716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090436.5413375-2285-100135892657339/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:37 compute-0 sudo[212714]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:37 compute-0 ceph-mon[73551]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:37 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:00:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:38.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:00:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:39 compute-0 python3.9[212868]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:00:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:39 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:39 compute-0 ceph-mon[73551]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:00:39 compute-0 sudo[213022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grfvdgjroxeostkfkitzwhxgawmawsxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090439.4433117-2903-106621904666777/AnsiballZ_seboolean.py'
Oct 10 10:00:39 compute-0 sudo[213022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:40.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:40 compute-0 python3.9[213024]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 10 10:00:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 10 10:00:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:00:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:40 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:00:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:41 compute-0 sudo[213022]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:41 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:41 compute-0 sudo[213181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvttbjakeakhcpoeeeczrcnfaubidmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090441.5337632-2927-223931372749780/AnsiballZ_copy.py'
Oct 10 10:00:41 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 10 10:00:41 compute-0 sudo[213181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:00:41.880 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:00:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:00:41.881 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:00:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:00:41.881 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:00:41 compute-0 ceph-mon[73551]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 10 10:00:41 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 10 10:00:41 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:41.919889) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:00:41 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 10 10:00:41 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441920007, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4199, "num_deletes": 502, "total_data_size": 8611328, "memory_usage": 8773824, "flush_reason": "Manual Compaction"}
Oct 10 10:00:41 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442001411, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8357485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13171, "largest_seqno": 17369, "table_properties": {"data_size": 8339729, "index_size": 12010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36450, "raw_average_key_size": 19, "raw_value_size": 8303208, "raw_average_value_size": 4480, "num_data_blocks": 525, "num_entries": 1853, "num_filter_entries": 1853, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089994, "oldest_key_time": 1760089994, "file_creation_time": 1760090441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 81672 microseconds, and 23264 cpu microseconds.
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.001582) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8357485 bytes OK
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.001641) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.003482) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.003502) EVENT_LOG_v1 {"time_micros": 1760090442003496, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.003527) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8594561, prev total WAL file size 8594561, number of live WAL files 2.
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.006038) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8161KB)], [32(12MB)]
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442006135, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 21058861, "oldest_snapshot_seqno": -1}
Oct 10 10:00:42 compute-0 python3.9[213183]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:42 compute-0 sudo[213181]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:42.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5070 keys, 15513996 bytes, temperature: kUnknown
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442156154, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15513996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15475427, "index_size": 24763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 126815, "raw_average_key_size": 25, "raw_value_size": 15378911, "raw_average_value_size": 3033, "num_data_blocks": 1042, "num_entries": 5070, "num_filter_entries": 5070, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760090442, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.156729) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15513996 bytes
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.158247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.1 rd, 103.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.1 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6092, records dropped: 1022 output_compression: NoCompression
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.158288) EVENT_LOG_v1 {"time_micros": 1760090442158269, "job": 14, "event": "compaction_finished", "compaction_time_micros": 150362, "compaction_time_cpu_micros": 52421, "output_level": 6, "num_output_files": 1, "total_output_size": 15513996, "num_input_records": 6092, "num_output_records": 5070, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442162092, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 10 10:00:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:42.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442165071, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.005916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.165115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.165120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.165122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.165123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:00:42.165125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-0 sudo[213334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prvjrlbkpelbwzvyucjypcooqaawtxtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090442.2236226-2927-207733505204005/AnsiballZ_copy.py'
Oct 10 10:00:42 compute-0 sudo[213334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 10:00:42 compute-0 python3.9[213336]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:42 compute-0 sudo[213334]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:43 compute-0 sudo[213487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbrasxgdvkglvksumrzqjqjwvrwjkxni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090442.9662614-2927-27477973025415/AnsiballZ_copy.py'
Oct 10 10:00:43 compute-0 sudo[213487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:43 compute-0 python3.9[213489]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:43 compute-0 sudo[213487]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:43 compute-0 sudo[213639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkckngrnbcdfyokilaudnorwejutwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090443.626468-2927-75737778529425/AnsiballZ_copy.py'
Oct 10 10:00:43 compute-0 sudo[213639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:43 compute-0 ceph-mon[73551]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 10:00:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:00:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:43 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:00:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:44.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:44 compute-0 python3.9[213641]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:44 compute-0 sudo[213639]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:44.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:44 compute-0 podman[213643]: 2025-10-10 10:00:44.22988651 +0000 UTC m=+0.062833787 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:00:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:00:44 compute-0 sudo[213810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nphbhbrjnysmfdmrjedzcohtsiaihbie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090444.2948859-2927-72699887974708/AnsiballZ_copy.py'
Oct 10 10:00:44 compute-0 sudo[213810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:44 compute-0 python3.9[213812]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:44 compute-0 sudo[213810]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:45 compute-0 sudo[213963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxdnwaupmcsmuxqmssqqksixowehjnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090445.231407-3035-161340120487727/AnsiballZ_copy.py'
Oct 10 10:00:45 compute-0 sudo[213963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:45 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:45 compute-0 python3.9[213965]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:45 compute-0 sudo[213963]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:45 compute-0 ceph-mon[73551]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:00:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:46.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:46.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:46 compute-0 sudo[214116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqngeemnblvgpnpzquqfhjrpgvxaxlqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090445.9071064-3035-148537952987775/AnsiballZ_copy.py'
Oct 10 10:00:46 compute-0 sudo[214116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:00:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:46 compute-0 python3.9[214118]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:00:46 compute-0 sudo[214116]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:00:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:00:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100046 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:00:46 compute-0 sudo[214269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewbhyxmzvftpeydhonixmvzgimraxqrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090446.5283864-3035-162589334620395/AnsiballZ_copy.py'
Oct 10 10:00:46 compute-0 sudo[214269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:46 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:00:46 compute-0 python3.9[214271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:47 compute-0 sudo[214269]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:47.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:00:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:47.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:00:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002890 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002890 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:47] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Oct 10 10:00:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:47] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Oct 10 10:00:47 compute-0 sudo[214421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvcqdvjemupnsjwxgjisavwvthufyxpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090447.1573057-3035-267287754082035/AnsiballZ_copy.py'
Oct 10 10:00:47 compute-0 sudo[214421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:47 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:47 compute-0 python3.9[214423]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:47 compute-0 sudo[214421]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:47 compute-0 ceph-mon[73551]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:00:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:48 compute-0 sudo[214574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cafsfkeworrwgutvcejyvhbixhvhpeai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090447.8585253-3035-39822129041375/AnsiballZ_copy.py'
Oct 10 10:00:48 compute-0 sudo[214574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:48.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:48 compute-0 python3.9[214576]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:48 compute-0 sudo[214574]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 10:00:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:49 compute-0 sudo[214727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piguymtlkgyvgypbtutftpoemfotopxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090449.0144045-3143-18351659183434/AnsiballZ_systemd.py'
Oct 10 10:00:49 compute-0 sudo[214727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:49 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500028b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:49 compute-0 python3.9[214729]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:49 compute-0 systemd[1]: Reloading.
Oct 10 10:00:49 compute-0 systemd-rc-local-generator[214755]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:49 compute-0 systemd-sysv-generator[214758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:50 compute-0 ceph-mon[73551]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 10:00:50 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 10 10:00:50 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 10 10:00:50 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 10 10:00:50 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 10 10:00:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:50.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:50 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 10 10:00:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:50.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:50 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 10 10:00:50 compute-0 sudo[214727]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 10 10:00:50 compute-0 sudo[214922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxcnxjnyaqlybquulogsgygshfikcajt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090450.400431-3143-199124606510466/AnsiballZ_systemd.py'
Oct 10 10:00:50 compute-0 sudo[214922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:51 compute-0 python3.9[214924]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:51 compute-0 systemd[1]: Reloading.
Oct 10 10:00:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:51 compute-0 systemd-rc-local-generator[214951]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:51 compute-0 systemd-sysv-generator[214955]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:51 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 10 10:00:51 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 10 10:00:51 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 10 10:00:51 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 10 10:00:51 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 10 10:00:51 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 10 10:00:51 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 10:00:51 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 10 10:00:51 compute-0 sudo[214922]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:51 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:52 compute-0 sudo[215138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjynimudgkpwujvjzjujnikgzegferf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090451.709362-3143-86761162874436/AnsiballZ_systemd.py'
Oct 10 10:00:52 compute-0 sudo[215138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:52 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 10 10:00:52 compute-0 ceph-mon[73551]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 10 10:00:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100052 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:00:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:52.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:52 compute-0 python3.9[215140]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:52 compute-0 systemd[1]: Reloading.
Oct 10 10:00:52 compute-0 systemd-rc-local-generator[215168]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:52 compute-0 systemd-sysv-generator[215172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 10 10:00:52 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 10 10:00:52 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 10 10:00:52 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 10 10:00:52 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 10 10:00:52 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 10 10:00:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 10 10:00:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 10 10:00:52 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 10 10:00:52 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 10 10:00:52 compute-0 sudo[215138]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500028d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:53 compute-0 sudo[215359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ideretjspnmdehcxzugthtkmframpwuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090452.9799788-3143-66186182901158/AnsiballZ_systemd.py'
Oct 10 10:00:53 compute-0 sudo[215359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:53 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:53 compute-0 python3.9[215361]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:53 compute-0 systemd[1]: Reloading.
Oct 10 10:00:53 compute-0 systemd-sysv-generator[215392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:53 compute-0 systemd-rc-local-generator[215388]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:53 compute-0 setroubleshoot[215141]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2cd23b0d-3434-4237-b043-5637c7cbed28
Oct 10 10:00:53 compute-0 setroubleshoot[215141]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify whether the domain needs this access, or you have a file with the wrong permissions on your system,
                                                  then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate the AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see a PATH record, check ownership/permissions on the file and fix it;
                                                  otherwise report it as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  To allow this access for now, execute:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
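The setroubleshoot alert above reduces to a standard AVC triage loop. A minimal sketch of that loop as a root shell session (the /etc/shadow watch is setroubleshoot's verbatim suggestion for enabling full auditing; restarting virtlogd.service as the reproduction step is an assumption, based on the denial having been raised by the virtlogd restart at 10:00:49):

    # enable full auditing so the next AVC record carries PATH information
    auditctl -w /etc/shadow -p w
    # reproduce the denial, then inspect the fresh AVC records
    systemctl restart virtlogd.service
    ausearch -m avc -ts recent
    # if the access turns out to be legitimate, build and load a local policy module
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp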
                                                  
Oct 10 10:00:54 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 10 10:00:54 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 10 10:00:54 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 10 10:00:54 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 10 10:00:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 10 10:00:54 compute-0 ceph-mon[73551]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 10 10:00:54 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 10 10:00:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:54.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:54 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 10 10:00:54 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 10 10:00:54 compute-0 sudo[215397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:00:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 10 10:00:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 10 10:00:54 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 10:00:54 compute-0 sudo[215397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:54 compute-0 sudo[215397]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:54 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 10 10:00:54 compute-0 sudo[215359]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:54.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:54 compute-0 sudo[215598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmgytinolzoktzuovqztlnsgvdnqxhaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090454.3492038-3143-84513319590965/AnsiballZ_systemd.py'
Oct 10 10:00:54 compute-0 sudo[215598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:54 compute-0 python3.9[215600]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:54 compute-0 systemd[1]: Reloading.
Oct 10 10:00:55 compute-0 systemd-rc-local-generator[215629]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:55 compute-0 systemd-sysv-generator[215633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e500028f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:55 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 10 10:00:55 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 10 10:00:55 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 10 10:00:55 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 10 10:00:55 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 10 10:00:55 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 10 10:00:55 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:00:55 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 10 10:00:55 compute-0 sudo[215598]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:55 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:56 compute-0 ceph-mon[73551]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:56 compute-0 sudo[215809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tacioftitjqctpmpqcvxaslmgctubnva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090455.7919867-3254-25014191022838/AnsiballZ_file.py'
Oct 10 10:00:56 compute-0 sudo[215809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:56.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:56 compute-0 python3.9[215811]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:56 compute-0 sudo[215809]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:56 compute-0 sudo[215962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-comciqajdkewgvpphmqxvpwsahlgxddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090456.5464141-3278-79482176033707/AnsiballZ_find.py'
Oct 10 10:00:56 compute-0 sudo[215962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:57.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:57.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:00:57.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:57 compute-0 ceph-mon[73551]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:57 compute-0 python3.9[215964]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 10:00:57 compute-0 sudo[215962]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:57] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Oct 10 10:00:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:00:57] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Oct 10 10:00:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:57 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:57 compute-0 sudo[216114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsajffkvzyioakbrrykldzqilepkiehh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090457.5451279-3302-53986076482274/AnsiballZ_command.py'
Oct 10 10:00:57 compute-0 sudo[216114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:58 compute-0 python3.9[216116]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
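The shell pipeline in this task reads the cluster fsid out of ceph.conf: awk splits each line on '=', prints the right-hand side of any line matching fsid, and the bare xargs trims the surrounding whitespace. A minimal reproduction against a throwaway file (the /tmp path is hypothetical; the fsid is the value this cluster logs elsewhere):

    printf '[global]\nfsid = 21f084a3-af34-5230-afe4-ea5cd24a55f4\n' > /tmp/ceph.conf
    awk -F '=' '/fsid/ {print $2}' /tmp/ceph.conf | xargs
    # prints: 21f084a3-af34-5230-afe4-ea5cd24a55f4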
Oct 10 10:00:58 compute-0 sudo[216114]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:00:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:00:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:58.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:00:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:58 compute-0 python3.9[216271]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 10:00:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:00:59 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:59 compute-0 ceph-mon[73551]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:59 compute-0 python3.9[216422]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:00.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:00 compute-0 python3.9[216544]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090459.4785645-3359-10253964225146/.source.xml follow=False _original_basename=secret.xml.j2 checksum=baa25a2f67c100fe0cd0e069ccc25ef935446dd6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:01:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e880056e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:01:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e50002930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:01:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:01:01 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c0048f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:01 compute-0 sudo[216695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpzjdfqbiwqzshsdmaavosyjdhxxbxql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090461.2786357-3404-78027671846625/AnsiballZ_command.py'
Oct 10 10:01:01 compute-0 sudo[216695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:01 compute-0 ceph-mon[73551]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:01 compute-0 CROND[216699]: (root) CMD (run-parts /etc/cron.hourly)
Oct 10 10:01:01 compute-0 run-parts[216702]: (/etc/cron.hourly) starting 0anacron
Oct 10 10:01:01 compute-0 anacron[216710]: Anacron started on 2025-10-10
Oct 10 10:01:01 compute-0 anacron[216710]: Will run job `cron.daily' in 10 min.
Oct 10 10:01:01 compute-0 anacron[216710]: Will run job `cron.weekly' in 30 min.
Oct 10 10:01:01 compute-0 anacron[216710]: Will run job `cron.monthly' in 50 min.
Oct 10 10:01:01 compute-0 anacron[216710]: Jobs will be executed sequentially
Oct 10 10:01:01 compute-0 run-parts[216712]: (/etc/cron.hourly) finished 0anacron
Oct 10 10:01:01 compute-0 CROND[216698]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 10 10:01:01 compute-0 python3.9[216697]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 21f084a3-af34-5230-afe4-ea5cd24a55f4
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
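This task rotates the libvirt secret holding the Ceph client key: the existing secret, whose UUID is the cluster fsid, is undefined, and a fresh one is defined from the rendered /tmp/secret.xml. A minimal sketch of the complete flow; the final secret-set-value step is an inference from the FSID/KEY environment passed to the follow-up task at 10:01:04 below, not something this log shows directly:

    virsh secret-undefine 21f084a3-af34-5230-afe4-ea5cd24a55f4
    virsh secret-define --file /tmp/secret.xml
    virsh secret-set-value --secret 21f084a3-af34-5230-afe4-ea5cd24a55f4 --base64 "$KEY"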
Oct 10 10:01:01 compute-0 polkitd[6931]: Registered Authentication Agent for unix-process:216714:336547 (system bus name :1.2996 [/usr/bin/pkttyagent --process 216714 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 10:01:01 compute-0 polkitd[6931]: Unregistered Authentication Agent for unix-process:216714:336547 (system bus name :1.2996, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 10:01:01 compute-0 polkitd[6931]: Registered Authentication Agent for unix-process:216713:336547 (system bus name :1.2997 [/usr/bin/pkttyagent --process 216713 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 10:01:01 compute-0 polkitd[6931]: Unregistered Authentication Agent for unix-process:216713:336547 (system bus name :1.2997, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 10:01:01 compute-0 sudo[216695]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:02.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:02 compute-0 python3.9[216875]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:01:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70004890 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[133317]: 10/10/2025 10:01:03 : epoch 68e8d788 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e70004890 fd 39 proxy ignored for local
Oct 10 10:01:03 compute-0 kernel: ganesha.nfsd[213031]: segfault at 50 ip 00007f6f3bf4e32e sp 00007f6f05ffa210 error 4 in libntirpc.so.5.8[7f6f3bf33000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 10 10:01:03 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:01:03 compute-0 systemd[1]: Started Process Core Dump (PID 216957/UID 0).
Oct 10 10:01:03 compute-0 sudo[217028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onrhzdxuyxpbidngoocoewpwpbtfhbnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090463.0887904-3452-150263275363294/AnsiballZ_command.py'
Oct 10 10:01:03 compute-0 sudo[217028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:03 compute-0 sudo[217028]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:03 compute-0 ceph-mon[73551]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:03 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 10 10:01:03 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.034s CPU time.
Oct 10 10:01:03 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 10 10:01:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100104 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:01:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:04.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:04 compute-0 sudo[217182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlkbxedzlijzcanxrgvmflxfsvmvlfua ; FSID=21f084a3-af34-5230-afe4-ea5cd24a55f4 KEY=AQAP1ehoAAAAABAAt8v7pISuvMofUPTRybMptA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090463.9293997-3476-202561375655078/AnsiballZ_command.py'
Oct 10 10:01:04 compute-0 sudo[217182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:04 compute-0 polkitd[6931]: Registered Authentication Agent for unix-process:217185:336806 (system bus name :1.3000 [/usr/bin/pkttyagent --process 217185 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 10:01:04 compute-0 polkitd[6931]: Unregistered Authentication Agent for unix-process:217185:336806 (system bus name :1.3000, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 10:01:04 compute-0 sudo[217182]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:04 compute-0 systemd-coredump[216975]: Process 133341 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 87:
                                                    #0  0x00007f6f3bf4e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
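The ganesha.nfsd segfault at 10:01:03 (faulting inside libntirpc.so.5.8) is what systemd-coredump captured above. A hedged sketch of how such a core is typically examined on an EL9 host; coredumpctl ships with systemd, while the gdb step assumes gdb and the nfs-ganesha/libntirpc debuginfo packages are installed:

    coredumpctl list ganesha.nfsd   # locate the entry for the crashed process
    coredumpctl info ganesha.nfsd   # show metadata and the captured backtrace
    coredumpctl gdb ganesha.nfsd    # load the core into gdb for a full backtrace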
Oct 10 10:01:05 compute-0 systemd[1]: systemd-coredump@3-216957-0.service: Deactivated successfully.
Oct 10 10:01:05 compute-0 systemd[1]: systemd-coredump@3-216957-0.service: Consumed 1.789s CPU time.
Oct 10 10:01:05 compute-0 podman[217220]: 2025-10-10 10:01:05.142698486 +0000 UTC m=+0.035107905 container died c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 10:01:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3273013b7c2f22df3bb08013777873058a29fa65dc57843084d85daaff9dac81-merged.mount: Deactivated successfully.
Oct 10 10:01:05 compute-0 podman[217220]: 2025-10-10 10:01:05.19460964 +0000 UTC m=+0.087019069 container remove c9c6859a1efbd284669acbdea4fba9946830792b5e15fa3556da759061f3e77c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:01:05 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:01:05 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:01:05 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 2.509s CPU time.
Oct 10 10:01:05 compute-0 ceph-mon[73551]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:06.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:06 compute-0 sudo[217389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhgfaakuoygphknkrecikvfbaosrxube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090465.9218516-3500-125343566863127/AnsiballZ_copy.py'
Oct 10 10:01:06 compute-0 sudo[217389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:06 compute-0 python3.9[217391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:06 compute-0 sudo[217389]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:07 compute-0 sudo[217542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqwaohfvdnbwhijicxfvbuhyqcaqhmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090466.7096388-3524-125264753918013/AnsiballZ_stat.py'
Oct 10 10:01:07 compute-0 sudo[217542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:07.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:01:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:07.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:01:07 compute-0 python3.9[217544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:07 compute-0 sudo[217542]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:07] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:01:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:07] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:01:07 compute-0 sudo[217676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhvbxfxylimzypzizaxtozevbzhjkdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090466.7096388-3524-125264753918013/AnsiballZ_copy.py'
Oct 10 10:01:07 compute-0 sudo[217676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:07 compute-0 podman[217639]: 2025-10-10 10:01:07.66244395 +0000 UTC m=+0.097410265 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 10 10:01:07 compute-0 ceph-mon[73551]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:07 compute-0 python3.9[217684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090466.7096388-3524-125264753918013/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:07 compute-0 sudo[217676]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:08.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:08 compute-0 sudo[217842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqgfjabiiyxadnsgirscuiwbmobrkkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090468.3225129-3572-237751179231366/AnsiballZ_file.py'
Oct 10 10:01:08 compute-0 sudo[217842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:08 compute-0 python3.9[217844]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:08 compute-0 sudo[217842]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100109 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:01:09 compute-0 sudo[217995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybnfpdxnurqatcnuansnabiewnelhebf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090469.2333767-3596-84249002677468/AnsiballZ_stat.py'
Oct 10 10:01:09 compute-0 sudo[217995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:09 compute-0 ceph-mon[73551]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:09 compute-0 python3.9[217997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:09 compute-0 sudo[217995]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:01:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:01:10 compute-0 sudo[218074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfjdjgwtiqzvtdwmaxwqyvkheucmctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090469.2333767-3596-84249002677468/AnsiballZ_file.py'
Oct 10 10:01:10 compute-0 sudo[218074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:01:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:01:10 compute-0 python3.9[218076]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:10 compute-0 sudo[218074]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 10:01:10 compute-0 sudo[218227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avmbvcpjwinzlqwxxwyvevhdreszqnww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090470.613819-3632-278051845993748/AnsiballZ_stat.py'
Oct 10 10:01:10 compute-0 sudo[218227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:11 compute-0 python3.9[218229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:11 compute-0 sudo[218227]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:11 compute-0 sudo[218305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlrsrjfqmxtxdnecrflzajcyuiioxjpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090470.613819-3632-278051845993748/AnsiballZ_file.py'
Oct 10 10:01:11 compute-0 sudo[218305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:11 compute-0 python3.9[218307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5lrcxd2n recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:11 compute-0 sudo[218305]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:11 compute-0 ceph-mon[73551]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 10:01:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:12.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:12 compute-0 sudo[218458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfcpdrfihkwgsvxlnnphdtxbcwcvfhjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090471.871838-3668-13807246922531/AnsiballZ_stat.py'
Oct 10 10:01:12 compute-0 sudo[218458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:12 compute-0 python3.9[218460]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:12 compute-0 sudo[218458]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:12 compute-0 sudo[218536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjukvvgqpkpyoqyzbydknxxikeujkhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090471.871838-3668-13807246922531/AnsiballZ_file.py'
Oct 10 10:01:12 compute-0 sudo[218536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:12 compute-0 sudo[218540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:01:12 compute-0 sudo[218540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:12 compute-0 sudo[218540]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:12 compute-0 sudo[218565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:01:12 compute-0 sudo[218565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:12 compute-0 python3.9[218538]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:12 compute-0 sudo[218536]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:13 compute-0 sudo[218565]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:01:13 compute-0 sudo[218745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:01:13 compute-0 sudo[218745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:13 compute-0 sudo[218745]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:13 compute-0 sudo[218794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdezmwbfjfidvpfperdmfjhqlnozxrzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090473.3740656-3707-226099836728831/AnsiballZ_command.py'
Oct 10 10:01:13 compute-0 sudo[218794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:13 compute-0 sudo[218799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:01:13 compute-0 sudo[218799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:13 compute-0 ceph-mon[73551]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:01:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:13 compute-0 python3.9[218798]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:13 compute-0 sudo[218794]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:14 compute-0 sudo[218885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:01:14 compute-0 sudo[218885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:14 compute-0 sudo[218885]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:14.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.234939085 +0000 UTC m=+0.049130896 container create b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:01:14 compute-0 systemd[1]: Started libpod-conmon-b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389.scope.
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.211581881 +0000 UTC m=+0.025773722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:14 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.331614878 +0000 UTC m=+0.145806699 container init b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_davinci, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.341817908 +0000 UTC m=+0.156009709 container start b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_davinci, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.346076993 +0000 UTC m=+0.160268794 container attach b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:01:14 compute-0 exciting_davinci[218980]: 167 167
Oct 10 10:01:14 compute-0 systemd[1]: libpod-b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389.scope: Deactivated successfully.
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.350381438 +0000 UTC m=+0.164573249 container died b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-68f52a5dbba7910d1ee096e55d649d1f812d2359aa0a684798821852c00af9d5-merged.mount: Deactivated successfully.
Oct 10 10:01:14 compute-0 podman[218919]: 2025-10-10 10:01:14.393444203 +0000 UTC m=+0.207636004 container remove b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:01:14 compute-0 podman[218962]: 2025-10-10 10:01:14.400922668 +0000 UTC m=+0.117975743 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:01:14 compute-0 systemd[1]: libpod-conmon-b3326be73dbeeb5f5f5f49e8968eb3d0c3e501035bff2a7c0ce2fd4dd18c5389.scope: Deactivated successfully.
Oct 10 10:01:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:14 compute-0 podman[219028]: 2025-10-10 10:01:14.583796503 +0000 UTC m=+0.055523718 container create ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:14 compute-0 systemd[1]: Started libpod-conmon-ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206.scope.
Oct 10 10:01:14 compute-0 podman[219028]: 2025-10-10 10:01:14.56114889 +0000 UTC m=+0.032876135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:14 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb5c0e857edd715e7a3748b0ce9e5ae44f2c177d6e24acab9ce37c10ad3bd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb5c0e857edd715e7a3748b0ce9e5ae44f2c177d6e24acab9ce37c10ad3bd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb5c0e857edd715e7a3748b0ce9e5ae44f2c177d6e24acab9ce37c10ad3bd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb5c0e857edd715e7a3748b0ce9e5ae44f2c177d6e24acab9ce37c10ad3bd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb5c0e857edd715e7a3748b0ce9e5ae44f2c177d6e24acab9ce37c10ad3bd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:14 compute-0 podman[219028]: 2025-10-10 10:01:14.678304736 +0000 UTC m=+0.150031971 container init ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_snyder, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:01:14 compute-0 podman[219028]: 2025-10-10 10:01:14.701165666 +0000 UTC m=+0.172892891 container start ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_snyder, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 10:01:14 compute-0 podman[219028]: 2025-10-10 10:01:14.705370557 +0000 UTC m=+0.177097782 container attach ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:01:14 compute-0 sudo[219122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgliitzltqvmvidenshwwvofltcmlak ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090474.2157462-3731-194278521448408/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 10:01:14 compute-0 sudo[219122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:14 compute-0 python3[219125]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 10:01:15 compute-0 sudo[219122]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:15 compute-0 reverent_snyder[219084]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:01:15 compute-0 reverent_snyder[219084]: --> All data devices are unavailable
Oct 10 10:01:15 compute-0 systemd[1]: libpod-ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206.scope: Deactivated successfully.
Oct 10 10:01:15 compute-0 podman[219028]: 2025-10-10 10:01:15.11316928 +0000 UTC m=+0.584896555 container died ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_snyder, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 10:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-51cb5c0e857edd715e7a3748b0ce9e5ae44f2c177d6e24acab9ce37c10ad3bd7-merged.mount: Deactivated successfully.
Oct 10 10:01:15 compute-0 podman[219028]: 2025-10-10 10:01:15.176611025 +0000 UTC m=+0.648338250 container remove ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:01:15 compute-0 systemd[1]: libpod-conmon-ee40e21a24975aacdedcd03a4e8c84b855f11b556d8f7b8f22faaf305da45206.scope: Deactivated successfully.
Oct 10 10:01:15 compute-0 sudo[218799]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:15 compute-0 sudo[219207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:01:15 compute-0 sudo[219207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:15 compute-0 sudo[219207]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:15 compute-0 sudo[219249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:01:15 compute-0 sudo[219249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:15 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 4.
Oct 10 10:01:15 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:01:15 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 2.509s CPU time.
Oct 10 10:01:15 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:01:15 compute-0 sudo[219358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtuvwwlsdpiskyosqwfqauzxhkocxob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090475.1987298-3755-140524919456683/AnsiballZ_stat.py'
Oct 10 10:01:15 compute-0 sudo[219358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:15 compute-0 podman[219417]: 2025-10-10 10:01:15.694633145 +0000 UTC m=+0.036557641 container create 4848832baa41992e15e603a4d95d6bd9b25d8ba0f353b9fb27bb9bcd0ef0434e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:01:15 compute-0 python3.9[219361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20830adcf8f33eddaf935d96ad9d00cc424a7a0315714589237ad69b8d548a22/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20830adcf8f33eddaf935d96ad9d00cc424a7a0315714589237ad69b8d548a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20830adcf8f33eddaf935d96ad9d00cc424a7a0315714589237ad69b8d548a22/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20830adcf8f33eddaf935d96ad9d00cc424a7a0315714589237ad69b8d548a22/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:15 compute-0 ceph-mon[73551]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:15 compute-0 podman[219417]: 2025-10-10 10:01:15.771829424 +0000 UTC m=+0.113753950 container init 4848832baa41992e15e603a4d95d6bd9b25d8ba0f353b9fb27bb9bcd0ef0434e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:15 compute-0 podman[219417]: 2025-10-10 10:01:15.678185038 +0000 UTC m=+0.020109564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:15 compute-0 podman[219417]: 2025-10-10 10:01:15.777044338 +0000 UTC m=+0.118968844 container start 4848832baa41992e15e603a4d95d6bd9b25d8ba0f353b9fb27bb9bcd0ef0434e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:15 compute-0 bash[219417]: 4848832baa41992e15e603a4d95d6bd9b25d8ba0f353b9fb27bb9bcd0ef0434e
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:01:15 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:01:15 compute-0 sudo[219358]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:01:15 compute-0 podman[219453]: 2025-10-10 10:01:15.856360964 +0000 UTC m=+0.043656265 container create f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:01:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:15 compute-0 systemd[1]: Started libpod-conmon-f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457.scope.
Oct 10 10:01:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:01:15 compute-0 podman[219453]: 2025-10-10 10:01:15.838267065 +0000 UTC m=+0.025562386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:15 compute-0 podman[219453]: 2025-10-10 10:01:15.943304389 +0000 UTC m=+0.130599720 container init f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 10:01:15 compute-0 podman[219453]: 2025-10-10 10:01:15.953312535 +0000 UTC m=+0.140607836 container start f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:01:15 compute-0 podman[219453]: 2025-10-10 10:01:15.956936408 +0000 UTC m=+0.144231699 container attach f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:15 compute-0 compassionate_bhaskara[219529]: 167 167
Oct 10 10:01:15 compute-0 systemd[1]: libpod-f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457.scope: Deactivated successfully.
Oct 10 10:01:15 compute-0 podman[219453]: 2025-10-10 10:01:15.960515141 +0000 UTC m=+0.147810432 container died f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ce53857e5f4dd3be1b1728a15ade5f46bd13ab321de23ff54a832943a41449-merged.mount: Deactivated successfully.
Oct 10 10:01:16 compute-0 podman[219453]: 2025-10-10 10:01:15.992312302 +0000 UTC m=+0.179607603 container remove f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:01:16 compute-0 systemd[1]: libpod-conmon-f5085e2c20fbd6b967727ac6e28af1f9e7f5c619ad159f501f028e7ab9e4d457.scope: Deactivated successfully.
Oct 10 10:01:16 compute-0 sudo[219598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jedgbgkucwxxvqgwosvbeapxkolqxgki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090475.1987298-3755-140524919456683/AnsiballZ_file.py'
Oct 10 10:01:16 compute-0 sudo[219598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:16.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:16.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.205729316 +0000 UTC m=+0.048480385 container create 84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_johnson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:16 compute-0 systemd[1]: Started libpod-conmon-84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436.scope.
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:01:16
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'images', '.rgw.root', 'vms', '.mgr']
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:01:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.187214424 +0000 UTC m=+0.029965503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c3cc30eecc8aa1bacbba8d4ad53e638e215e11ee84975c42f4cdb4250683c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c3cc30eecc8aa1bacbba8d4ad53e638e215e11ee84975c42f4cdb4250683c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c3cc30eecc8aa1bacbba8d4ad53e638e215e11ee84975c42f4cdb4250683c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c3cc30eecc8aa1bacbba8d4ad53e638e215e11ee84975c42f4cdb4250683c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:16 compute-0 python3.9[219601]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.30529658 +0000 UTC m=+0.148047679 container init 84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:01:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:01:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.320634602 +0000 UTC m=+0.163385681 container start 84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.325160655 +0000 UTC m=+0.167911734 container attach 84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:01:16 compute-0 sudo[219598]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
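The pg_autoscaler pass above reduces to one product per pool: the pool's share of raw capacity (the "using ... of space" figure), times the pool's bias, times the root's PG budget, then quantized to a power of two. A minimal Python sketch reproducing the logged numbers, assuming the root budget here is 300 (three OSDs at the default mon_target_pg_per_osd of 100); the quantization step is omitted:

    # Sketch of the pg_autoscaler arithmetic seen in the log. root_pg_target=300
    # is an assumption (3 OSDs x mon_target_pg_per_osd=100), not read from the log.
    def pg_target(capacity_ratio, bias, root_pg_target=300):
        return capacity_ratio * bias * root_pg_target

    # Capacity ratios and biases copied verbatim from the lines above:
    print(pg_target(7.185749983720779e-06, 1.0))  # '.mgr': 0.0021557249951162337
    print(pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta': 0.0006104707950771635

The quantized results then respect per-pool floors, which is why '.mgr' lands on 1 while the near-zero data pools stay at their current 32.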
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:01:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]: {
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:     "0": [
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:         {
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "devices": [
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "/dev/loop3"
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             ],
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "lv_name": "ceph_lv0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "lv_size": "21470642176",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "name": "ceph_lv0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "tags": {
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.cluster_name": "ceph",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.crush_device_class": "",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.encrypted": "0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.osd_id": "0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.type": "block",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.vdo": "0",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:                 "ceph.with_tpm": "0"
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             },
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "type": "block",
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:             "vg_name": "ceph_vg0"
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:         }
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]:     ]
Oct 10 10:01:16 compute-0 hopeful_johnson[219623]: }
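The JSON document printed by hopeful_johnson is ceph-volume's LV inventory for this host, evidently what cephadm caches a moment later under the mgr/cephadm/host.compute-0.devices.0 config-key (10:01:18). A hedged sketch for consuming that shape; only the structure is taken from the log, and the field selection is illustrative:

    import json

    def osd_devices(raw: str) -> dict:
        """Map OSD id -> (lv_path, backing devices, osd_fsid) from the
        ceph-volume JSON inventory printed above."""
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                out[osd_id] = (lv["lv_path"], lv["devices"],
                               lv["tags"]["ceph.osd_fsid"])
        return out

    # Applied to the payload above this yields:
    # {'0': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'],
    #        'c307f4a4-39e7-4a9c-9d19-a2b8712089ab')}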
Oct 10 10:01:16 compute-0 systemd[1]: libpod-84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436.scope: Deactivated successfully.
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.644859525 +0000 UTC m=+0.487610594 container died 84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 10:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1c3cc30eecc8aa1bacbba8d4ad53e638e215e11ee84975c42f4cdb4250683c9-merged.mount: Deactivated successfully.
Oct 10 10:01:16 compute-0 podman[219607]: 2025-10-10 10:01:16.687445824 +0000 UTC m=+0.530196893 container remove 84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:01:16 compute-0 systemd[1]: libpod-conmon-84f2decce6a03dbc3a274505b82a53823e276b5d622a268ab0daa924a0aad436.scope: Deactivated successfully.
Oct 10 10:01:16 compute-0 sudo[219249]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:16 compute-0 sudo[219702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:01:16 compute-0 sudo[219702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:16 compute-0 sudo[219702]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:16 compute-0 sudo[219745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:01:16 compute-0 sudo[219745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:17 compute-0 sudo[219843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngoaqbceglqhfebijksyofbvvrkovque ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090476.7191215-3791-21595946340959/AnsiballZ_stat.py'
Oct 10 10:01:17 compute-0 sudo[219843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:17.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:01:17 compute-0 python3.9[219845]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:17 compute-0 sudo[219843]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.327040279 +0000 UTC m=+0.049864730 container create b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chebyshev, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 10:01:17 compute-0 systemd[1]: Started libpod-conmon-b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1.scope.
Oct 10 10:01:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:17] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:01:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:17] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.306263695 +0000 UTC m=+0.029088186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:17 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.426118466 +0000 UTC m=+0.148942967 container init b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chebyshev, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.437542876 +0000 UTC m=+0.160367347 container start b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.441351546 +0000 UTC m=+0.164176017 container attach b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:01:17 compute-0 sleepy_chebyshev[219928]: 167 167
Oct 10 10:01:17 compute-0 systemd[1]: libpod-b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1.scope: Deactivated successfully.
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.445682702 +0000 UTC m=+0.168507153 container died b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 10:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0b44d1784656cccfe3daa637eea8630b554a71a692885af4f94593c7854edcc-merged.mount: Deactivated successfully.
Oct 10 10:01:17 compute-0 podman[219887]: 2025-10-10 10:01:17.485458424 +0000 UTC m=+0.208282865 container remove b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:17 compute-0 systemd[1]: libpod-conmon-b8814046755bbf9f21c025ab6da20ea00ffce579183094df171af92434aaf1f1.scope: Deactivated successfully.
Oct 10 10:01:17 compute-0 sudo[219993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcdapfzxuuwbdewxvtecxxkrweprzrbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090476.7191215-3791-21595946340959/AnsiballZ_file.py'
Oct 10 10:01:17 compute-0 sudo[219993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:17 compute-0 podman[220003]: 2025-10-10 10:01:17.689597837 +0000 UTC m=+0.073358469 container create 4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:17 compute-0 python3.9[219997]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:17 compute-0 sudo[219993]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:17 compute-0 podman[220003]: 2025-10-10 10:01:17.64233935 +0000 UTC m=+0.026099982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:17 compute-0 systemd[1]: Started libpod-conmon-4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077.scope.
Oct 10 10:01:17 compute-0 ceph-mon[73551]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:17 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a4937878c22662046725043a5d91fb1916e077fa17fd4f63fd32732765f0547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a4937878c22662046725043a5d91fb1916e077fa17fd4f63fd32732765f0547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a4937878c22662046725043a5d91fb1916e077fa17fd4f63fd32732765f0547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a4937878c22662046725043a5d91fb1916e077fa17fd4f63fd32732765f0547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:17 compute-0 podman[220003]: 2025-10-10 10:01:17.818569375 +0000 UTC m=+0.202329987 container init 4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:01:17 compute-0 podman[220003]: 2025-10-10 10:01:17.826568767 +0000 UTC m=+0.210329369 container start 4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:17 compute-0 podman[220003]: 2025-10-10 10:01:17.829648924 +0000 UTC m=+0.213409576 container attach 4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:18.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:18.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:18 compute-0 sudo[220243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtrwdpiqsflkgccpcotsnijpsxbprxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090478.1125169-3827-170479034989693/AnsiballZ_stat.py'
Oct 10 10:01:18 compute-0 sudo[220243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:18 compute-0 lvm[220247]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:01:18 compute-0 lvm[220247]: VG ceph_vg0 finished
Oct 10 10:01:18 compute-0 beautiful_dijkstra[220032]: {}
Oct 10 10:01:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:01:18 compute-0 podman[220003]: 2025-10-10 10:01:18.570529936 +0000 UTC m=+0.954290558 container died 4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 10:01:18 compute-0 systemd[1]: libpod-4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077.scope: Deactivated successfully.
Oct 10 10:01:18 compute-0 systemd[1]: libpod-4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077.scope: Consumed 1.150s CPU time.
Oct 10 10:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a4937878c22662046725043a5d91fb1916e077fa17fd4f63fd32732765f0547-merged.mount: Deactivated successfully.
Oct 10 10:01:18 compute-0 python3.9[220245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:18 compute-0 podman[220003]: 2025-10-10 10:01:18.626965882 +0000 UTC m=+1.010726484 container remove 4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:18 compute-0 systemd[1]: libpod-conmon-4928e7e7d2969ad6e8301519a3cab898b33392702bd4a486f6f5203a28a1d077.scope: Deactivated successfully.
Oct 10 10:01:18 compute-0 sudo[220243]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:18 compute-0 sudo[219745]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:01:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:01:18 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:18 compute-0 sudo[220271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:01:18 compute-0 sudo[220271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:18 compute-0 sudo[220271]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:18 compute-0 sudo[220362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pckbcamzpnwphhnkkqoiyturrblzqcyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090478.1125169-3827-170479034989693/AnsiballZ_file.py'
Oct 10 10:01:18 compute-0 sudo[220362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:19 compute-0 python3.9[220364]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:19 compute-0 sudo[220362]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:19 compute-0 ceph-mon[73551]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:01:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:19 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:19 compute-0 sudo[220514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjclqxhgewghaloqupgksptyxgplceg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090479.495648-3863-177481525174551/AnsiballZ_stat.py'
Oct 10 10:01:19 compute-0 sudo[220514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:19 compute-0 python3.9[220516]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:20 compute-0 sudo[220514]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:20.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:20 compute-0 sudo[220593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wveulewpdordqmdqzgwnglctlmbnhrvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090479.495648-3863-177481525174551/AnsiballZ_file.py'
Oct 10 10:01:20 compute-0 sudo[220593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:20 compute-0 python3.9[220595]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:20 compute-0 sudo[220593]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:01:21 compute-0 sudo[220746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcahxskxidnimkdktgaorgopyfrxamsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090480.771554-3899-211183343908179/AnsiballZ_stat.py'
Oct 10 10:01:21 compute-0 sudo[220746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:21 compute-0 python3.9[220748]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:21 compute-0 sudo[220746]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:21 compute-0 ceph-mon[73551]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:01:21 compute-0 sudo[220871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubsowiejtpcjamthypwfszrmlsqebzfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090480.771554-3899-211183343908179/AnsiballZ_copy.py'
Oct 10 10:01:21 compute-0 sudo[220871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:21 compute-0 python3.9[220873]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090480.771554-3899-211183343908179/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:22 compute-0 sudo[220871]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 10 10:01:22 compute-0 sudo[221024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrpxlwuglkfcjcaaghrrqqwfjodzszcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090482.3539917-3944-254684709791941/AnsiballZ_file.py'
Oct 10 10:01:22 compute-0 sudo[221024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:22 compute-0 python3.9[221026]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:22 compute-0 sudo[221024]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:23 compute-0 sudo[221177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnvzqoflcytyusarqzqcmemztrkmvlzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090483.141698-3968-11837633003552/AnsiballZ_command.py'
Oct 10 10:01:23 compute-0 sudo[221177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:23 compute-0 python3.9[221179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:23 compute-0 sudo[221177]: pam_unix(sudo:session): session closed for user root
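The pipeline logged at 10:01:23 is the syntax gate for everything staged above: the five EDPM fragments are concatenated in load order and handed to nft -c -f -, which parses the combined ruleset without committing it. The same check, sketched in Python:

    import subprocess
    from pathlib import Path

    # Fragment list and order copied from the logged command line.
    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = "\n".join(Path(p).read_text() for p in FRAGMENTS)
    # nft -c ("check") dry-runs the ruleset read from stdin (-f -).
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)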
Oct 10 10:01:23 compute-0 ceph-mon[73551]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 10 10:01:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.786695) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483786729, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 252, "total_data_size": 823218, "memory_usage": 835376, "flush_reason": "Manual Compaction"}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483793298, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 571764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17370, "largest_seqno": 17980, "table_properties": {"data_size": 568864, "index_size": 872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7595, "raw_average_key_size": 19, "raw_value_size": 562820, "raw_average_value_size": 1481, "num_data_blocks": 38, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090442, "oldest_key_time": 1760090442, "file_creation_time": 1760090483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6702 microseconds, and 3034 cpu microseconds.
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.793392) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 571764 bytes OK
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.793421) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.795180) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.795201) EVENT_LOG_v1 {"time_micros": 1760090483795194, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.795228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 819961, prev total WAL file size 819961, number of live WAL files 2.
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.795814) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(558KB)], [35(14MB)]
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483795848, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16085760, "oldest_snapshot_seqno": -1}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4948 keys, 12216982 bytes, temperature: kUnknown
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483871060, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12216982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12183355, "index_size": 20141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 124665, "raw_average_key_size": 25, "raw_value_size": 12093010, "raw_average_value_size": 2444, "num_data_blocks": 840, "num_entries": 4948, "num_filter_entries": 4948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760090483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.871418) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12216982 bytes
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.872827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.6 rd, 162.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(49.5) write-amplify(21.4) OK, records in: 5450, records dropped: 502 output_compression: NoCompression
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.872853) EVENT_LOG_v1 {"time_micros": 1760090483872841, "job": 16, "event": "compaction_finished", "compaction_time_micros": 75315, "compaction_time_cpu_micros": 23895, "output_level": 6, "num_output_files": 1, "total_output_size": 12216982, "num_input_records": 5450, "num_output_records": 4948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483873098, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483876418, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.795740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.876513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.876522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.876527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.876531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:01:23.876535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
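Every rocksdb EVENT_LOG_v1 record in the mon burst above carries a complete JSON document, so the store.db flush/compaction activity can be mined straight from the journal. A small sketch, with the line shape taken from this log:

    import json, re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        """Yield the JSON payload of each EVENT_LOG_v1 journal line."""
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # For job 16 above this yields compaction_started (input_data_size
    # 16085760) followed by compaction_finished (total_output_size 12216982).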
Oct 10 10:01:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:24.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100124 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:01:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:24.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:24 compute-0 sudo[221333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fduwwfsuyxdusxmsqozlyoddfciihtym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090484.0266526-3992-227135847074469/AnsiballZ_blockinfile.py'
Oct 10 10:01:24 compute-0 sudo[221333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 10 10:01:24 compute-0 python3.9[221335]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:24 compute-0 sudo[221333]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:25 compute-0 sudo[221486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyfhgyzykbpfvqrwwxuejevmrfvllzeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090485.0820432-4019-129308249866840/AnsiballZ_command.py'
Oct 10 10:01:25 compute-0 sudo[221486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:25 compute-0 python3.9[221488]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:25 compute-0 sudo[221486]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:25 compute-0 ceph-mon[73551]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 10 10:01:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:26 compute-0 sudo[221640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmtrwthrpbjawvguznwgyvblryysfgov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090485.9625282-4043-21530638453718/AnsiballZ_stat.py'
Oct 10 10:01:26 compute-0 sudo[221640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:26 compute-0 python3.9[221642]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:01:26 compute-0 sudo[221640]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:27.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:01:27 compute-0 sudo[221795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbwgkeqhyqzoinjpfeqnlsifgjlumym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090486.7783535-4067-226325009157126/AnsiballZ_command.py'
Oct 10 10:01:27 compute-0 sudo[221795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:27 compute-0 python3.9[221797]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:27 compute-0 sudo[221795]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:27] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:01:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:27] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:01:27 compute-0 ceph-mon[73551]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 10 10:01:27 compute-0 sudo[221950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvwibpbsqfohnnyppttkvkkvrpuptot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090487.6717813-4091-12496424953035/AnsiballZ_file.py'
Oct 10 10:01:27 compute-0 sudo[221950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000012:nfs.cephfs.2: -2
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:01:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:28 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:01:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:01:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:01:28 compute-0 python3.9[221952]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:28 compute-0 sudo[221950]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 10 10:01:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:28 compute-0 sudo[222116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jclarwprtxaqxudqqqpijowknvuadmmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090488.5323582-4115-276051642360012/AnsiballZ_stat.py'
Oct 10 10:01:28 compute-0 sudo[222116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:29 compute-0 python3.9[222118]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:29 compute-0 sudo[222116]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:29 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:29 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:29 compute-0 sudo[222242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbxvkevoxmrtipmgysoedycgyjwpfgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090488.5323582-4115-276051642360012/AnsiballZ_copy.py'
Oct 10 10:01:29 compute-0 sudo[222242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:29 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:29 compute-0 python3.9[222244]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090488.5323582-4115-276051642360012/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:29 compute-0 sudo[222242]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:29 compute-0 ceph-mon[73551]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 10 10:01:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:30.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:30 compute-0 sudo[222395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjeynqpcmnlozrrkxnphwobcxwzyodb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090490.011566-4160-178600441085042/AnsiballZ_stat.py'
Oct 10 10:01:30 compute-0 sudo[222395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:30 compute-0 python3.9[222397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:30 compute-0 sudo[222395]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:30 compute-0 sudo[222519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvykwsohxugqkgshpzfauqbggjjipzhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090490.011566-4160-178600441085042/AnsiballZ_copy.py'
Oct 10 10:01:30 compute-0 sudo[222519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:31 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:31 compute-0 python3.9[222521]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090490.011566-4160-178600441085042/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100131 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:01:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:31 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:31 compute-0 sudo[222519]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:01:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:31 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:31 compute-0 sudo[222671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvilrdfnmqwnwbgqjphgtixhkjlhbody ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090491.5388484-4205-279676945509131/AnsiballZ_stat.py'
Oct 10 10:01:31 compute-0 sudo[222671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:31 compute-0 ceph-mon[73551]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:32 compute-0 python3.9[222673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:32 compute-0 sudo[222671]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:01:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:32.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:01:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:32 compute-0 sudo[222795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psiwpdqhstxoxnpcpbgrzhkesmpbktve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090491.5388484-4205-279676945509131/AnsiballZ_copy.py'
Oct 10 10:01:32 compute-0 sudo[222795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:32 compute-0 python3.9[222797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090491.5388484-4205-279676945509131/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:32 compute-0 sudo[222795]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:33 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:33 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:33 compute-0 sudo[222948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbprqqrgllknzwtimmvzgqpiqsnlzthd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090493.0916946-4250-149082634884031/AnsiballZ_systemd.py'
Oct 10 10:01:33 compute-0 sudo[222948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:33 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:33 compute-0 python3.9[222950]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:01:33 compute-0 systemd[1]: Reloading.
Oct 10 10:01:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:33 compute-0 ceph-mon[73551]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:33 compute-0 systemd-rc-local-generator[222977]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:33 compute-0 systemd-sysv-generator[222980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:34.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:34 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 10 10:01:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100134 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:01:34 compute-0 sudo[222948]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:34 compute-0 sudo[222992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:01:34 compute-0 sudo[222992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:34 compute-0 sudo[222992]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:34 compute-0 sudo[223166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvekvdpylbpyjzcbgsbxstfjndglobiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090494.4471612-4274-165094923851627/AnsiballZ_systemd.py'
Oct 10 10:01:34 compute-0 sudo[223166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:35 compute-0 python3.9[223168]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 10:01:35 compute-0 systemd[1]: Reloading.
Oct 10 10:01:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:35 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:35 compute-0 systemd-rc-local-generator[223191]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:35 compute-0 systemd-sysv-generator[223198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:35 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:35 compute-0 systemd[1]: Reloading.
Oct 10 10:01:35 compute-0 systemd-rc-local-generator[223234]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:35 compute-0 systemd-sysv-generator[223238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:35 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:35 compute-0 sudo[223166]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:35 compute-0 ceph-mon[73551]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:36.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:36.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:36 compute-0 sshd-session[163050]: Connection closed by 192.168.122.30 port 37382
Oct 10 10:01:36 compute-0 sshd-session[163047]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:01:36 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 10 10:01:36 compute-0 systemd[1]: session-54.scope: Consumed 3min 45.305s CPU time.
Oct 10 10:01:36 compute-0 systemd-logind[806]: Session 54 logged out. Waiting for processes to exit.
Oct 10 10:01:36 compute-0 systemd-logind[806]: Removed session 54.
Oct 10 10:01:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:37.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:37.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:37.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:37 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:37 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:37] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:01:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:37] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:01:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:37 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:37 compute-0 ceph-mon[73551]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:38.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:38.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:38 compute-0 podman[223269]: 2025-10-10 10:01:38.317714074 +0000 UTC m=+0.155507484 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:01:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:39 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:39 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:39 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:39 compute-0 ceph-mon[73551]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:40.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:40.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:41 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:41 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:41 compute-0 sshd-session[223299]: Accepted publickey for zuul from 192.168.122.30 port 52458 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:01:41 compute-0 systemd-logind[806]: New session 55 of user zuul.
Oct 10 10:01:41 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 10 10:01:41 compute-0 sshd-session[223299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:01:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:41 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:01:41.881 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:01:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:01:41.882 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:01:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:01:41.882 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:01:41 compute-0 ceph-mon[73551]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:42.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:42.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:42 compute-0 python3.9[223453]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 10:01:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:42 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:43 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:43 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:43 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:43 compute-0 ceph-mon[73551]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:43 compute-0 sudo[223608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmoztqkjnhtcfkljuxjcljfninskjpkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090503.4454105-62-58779194045923/AnsiballZ_file.py'
Oct 10 10:01:43 compute-0 sudo[223608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:01:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:01:44 compute-0 python3.9[223610]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:44 compute-0 sudo[223608]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:44.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:44 compute-0 sudo[223774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmskfwdgkqnxxbzfaschipwgrgmwhvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090504.4078166-62-198156442185706/AnsiballZ_file.py'
Oct 10 10:01:44 compute-0 sudo[223774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:44 compute-0 podman[223735]: 2025-10-10 10:01:44.724269208 +0000 UTC m=+0.059129802 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:01:44 compute-0 python3.9[223782]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:44 compute-0 sudo[223774]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:45 compute-0 sudo[223933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvickkpwgpjcyojvjnizwiwfabwisxzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090505.0917025-62-214560505461242/AnsiballZ_file.py'
Oct 10 10:01:45 compute-0 sudo[223933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:45 compute-0 python3.9[223935]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:45 compute-0 sudo[223933]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:45 compute-0 ceph-mon[73551]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:46 compute-0 sudo[224085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlxmseaqzgxheyrtkuxtvlvtrgigoea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090505.7541249-62-32499399928536/AnsiballZ_file.py'
Oct 10 10:01:46 compute-0 sudo[224085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:46.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:46.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:46 compute-0 python3.9[224088]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 10:01:46 compute-0 sudo[224085]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:01:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:01:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:46 compute-0 sudo[224238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrazlmqaowtrmazehnolvhtnmhrlxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090506.431456-62-203451425066874/AnsiballZ_file.py'
Oct 10 10:01:46 compute-0 sudo[224238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:46 compute-0 python3.9[224240]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:46 compute-0 sudo[224238]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:47.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:47.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:47.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
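These dispatcher errors show Alertmanager timing out while POSTing alerts to the ceph-dashboard webhook receivers on compute-1 and compute-2. Any listener that answers 2xx on that path would clear the retries; a throwaway debugging receiver could be sketched as follows (path and port are copied from the error URL, the handler body is an assumption):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PrometheusReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                print("alert payload:", body[:200])  # inspect what Alertmanager sends
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), PrometheusReceiver).serve_forever()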
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:47 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:47 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:01:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:01:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:47 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:47 compute-0 ceph-mon[73551]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:48 compute-0 sudo[224391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bssfzvgnppmczrrfotkjwfzfprsrccpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090507.5600047-170-145985511552652/AnsiballZ_stat.py'
Oct 10 10:01:48 compute-0 sudo[224391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:01:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:48.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:01:48 compute-0 python3.9[224393]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:01:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:48.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:48 compute-0 sudo[224391]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:01:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:48 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:01:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:49 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:49 compute-0 sudo[224547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjlezkhtiuiyoepovqdeeaortjfglft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090508.5127938-194-183767363068670/AnsiballZ_systemd.py'
Oct 10 10:01:49 compute-0 sudo[224547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:49 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:49 compute-0 python3.9[224549]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
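This ansible.builtin.systemd call (enabled=False, state=stopped for iscsid.socket) is what triggers the systemd "Reloading." lines that follow. Its effect is equivalent to a single systemctl invocation, sketched here for illustration:

    import subprocess

    def disable_and_stop(unit: str) -> None:
        # enabled=False + state=stopped in one step: --now stops the unit
        # while disabling it.
        subprocess.run(["systemctl", "disable", "--now", unit], check=True)

    disable_and_stop("iscsid.socket")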
Oct 10 10:01:49 compute-0 systemd[1]: Reloading.
Oct 10 10:01:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:49 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:49 compute-0 systemd-sysv-generator[224583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:49 compute-0 systemd-rc-local-generator[224579]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:49 compute-0 sudo[224547]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:50 compute-0 ceph-mon[73551]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:01:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:50.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:50.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:01:50 compute-0 sudo[224737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfkluwpfxgkkwsscfgqeczceaglvcrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090510.1363993-218-168075585683821/AnsiballZ_service_facts.py'
Oct 10 10:01:50 compute-0 sudo[224737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:50 compute-0 python3.9[224739]: ansible-ansible.builtin.service_facts Invoked
Oct 10 10:01:50 compute-0 network[224757]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 10:01:50 compute-0 network[224758]: 'network-scripts' will be removed from distribution in near future.
Oct 10 10:01:50 compute-0 network[224759]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 10:01:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:51 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:51 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:51 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:52 compute-0 ceph-mon[73551]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:01:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:53 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:53 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:53 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:54 compute-0 ceph-mon[73551]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100154 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:01:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:54 compute-0 sudo[224827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:01:54 compute-0 sudo[224827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:54 compute-0 sudo[224827]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:55 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:55 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:55 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:55 compute-0 sudo[224737]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:56 compute-0 ceph-mon[73551]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:01:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:56.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:01:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:56 compute-0 sudo[225062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzwkaxzqyyfaiwvnujlnqklxyjhwkxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090516.4852247-242-204473160030022/AnsiballZ_systemd.py'
Oct 10 10:01:56 compute-0 sudo[225062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:01:57.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:01:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:57 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:57 compute-0 python3.9[225064]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:01:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:57 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:57 compute-0 systemd[1]: Reloading.
Oct 10 10:01:57 compute-0 systemd-sysv-generator[225096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:57 compute-0 systemd-rc-local-generator[225092]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:01:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:01:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:01:57 compute-0 sudo[225062]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:57 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:58 compute-0 ceph-mon[73551]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:01:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:58.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:01:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:01:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:58 compute-0 python3.9[225253]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:01:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:59 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:59 compute-0 sudo[225404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymhmdaxvenimatfvshdrslzaxhqxojfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090518.6339216-293-197650944769649/AnsiballZ_podman_container.py'
Oct 10 10:01:59 compute-0 sudo[225404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:59 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:59 compute-0 python3.9[225406]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 10:01:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:01:59 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:59 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:02:00 compute-0 ceph-mon[73551]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:00.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:00 compute-0 podman[225420]: 2025-10-10 10:02:00.887737025 +0000 UTC m=+1.356725797 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.0197024 +0000 UTC m=+0.040253117 container create 22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.0594] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 10 10:02:01 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 10:02:01 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 10:02:01 compute-0 kernel: veth0: entered allmulticast mode
Oct 10 10:02:01 compute-0 kernel: veth0: entered promiscuous mode
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.0748] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 10 10:02:01 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 10:02:01 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.0766] device (veth0): carrier: link connected
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.0769] device (podman0): carrier: link connected
Oct 10 10:02:01 compute-0 systemd-udevd[225506]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:02:01 compute-0 systemd-udevd[225509]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.000025541 +0000 UTC m=+0.020576258 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1039] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1067] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1082] device (podman0): Activation: starting connection 'podman0' (5e038de9-901b-4fcc-b7f8-1d25f2764cd0)
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1089] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1096] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1101] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1109] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 10:02:01 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1406] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1408] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.1416] device (podman0): Activation: successful, device activated.
Oct 10 10:02:01 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 10 10:02:01 compute-0 ceph-mon[73551]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:01 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:01 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:02:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:01 compute-0 systemd[1]: Started libpod-conmon-22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71.scope.
Oct 10 10:02:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.420931663 +0000 UTC m=+0.441482470 container init 22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.432782791 +0000 UTC m=+0.453333528 container start 22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.436258412 +0000 UTC m=+0.456809149 container attach 22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:02:01 compute-0 iscsid_config[225640]: iqn.1994-05.com.redhat:d7977a3a13b0
Oct 10 10:02:01 compute-0 systemd[1]: libpod-22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71.scope: Deactivated successfully.
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.440724835 +0000 UTC m=+0.461275562 container died 22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:02:01 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 10:02:01 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 10 10:02:01 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 10 10:02:01 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 10:02:01 compute-0 NetworkManager[44849]: <info>  [1760090521.5029] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:02:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:01 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:01 compute-0 systemd[1]: run-netns-netns\x2d8f0af16b\x2dad2c\x2df8ec\x2d08f6\x2d571ffb1bbfec.mount: Deactivated successfully.
Oct 10 10:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b0382285ae78b38f3e4525be0e39a97ce79af4885636e8873a63db689b1f24-merged.mount: Deactivated successfully.
Oct 10 10:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71-userdata-shm.mount: Deactivated successfully.
Oct 10 10:02:01 compute-0 podman[225481]: 2025-10-10 10:02:01.874713585 +0000 UTC m=+0.895264302 container remove 22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 10 10:02:01 compute-0 python3.9[225406]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 10 10:02:01 compute-0 systemd[1]: libpod-conmon-22ce6ffbb2e7719f1af48028993e9b631632f921aa751571c1208475bc20ec71.scope: Deactivated successfully.
Oct 10 10:02:02 compute-0 python3.9[225406]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
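The PODMAN-CONTAINER-DEBUG line above records the actual one-shot run: a throwaway openstack-iscsid container invoking /usr/sbin/iscsi-iname to mint the initiator IQN printed at 10:02:01 (iqn.1994-05.com.redhat:d7977a3a13b0). The trailing "Error generating systemd" appears harmless here, since the module probes generate_systemd only after the --rm container has already been removed. Reproducing the IQN generation directly (image reference copied verbatim from the log):

    import subprocess

    IMAGE = ("quay.io/podified-antelope-centos9/openstack-iscsid"
             "@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f")

    # One-shot run mirroring the logged command; --rm removes the container
    # as soon as it has printed the generated IQN on stdout.
    iqn = subprocess.run(
        ["podman", "run", "--name", "iscsid_config", "--rm", "--tty", IMAGE,
         "/usr/sbin/iscsi-iname"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(iqn)  # e.g. iqn.1994-05.com.redhat:d7977a3a13b0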
Oct 10 10:02:02 compute-0 sudo[225404]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:02.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:02 compute-0 sudo[225883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpuvxhllaiwmzdnkvxuahxuelcbkrpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090522.5460787-317-63469746703378/AnsiballZ_stat.py'
Oct 10 10:02:02 compute-0 sudo[225883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:03 compute-0 python3.9[225885]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:03 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:03 compute-0 ceph-mon[73551]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:03 compute-0 sudo[225883]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:03 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:03 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:03 compute-0 sudo[226006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwqrbhxtlatniudbggccmsodjkongipu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090522.5460787-317-63469746703378/AnsiballZ_copy.py'
Oct 10 10:02:03 compute-0 sudo[226006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:03 compute-0 python3.9[226008]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090522.5460787-317-63469746703378/.source.iscsi _original_basename=.ioetm3di follow=False checksum=0035888035c2120454cd66d7e148d2dfd71308d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:03 compute-0 sudo[226006]: pam_unix(sudo:session): session closed for user root
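The ansible.legacy.copy task above installs the freshly generated IQN into /etc/iscsi/initiatorname.iscsi with mode 0644 (content is redacted as NOT_LOGGING_PARAMETER), and the next task touches /etc/iscsi/.initiator_reset as a marker file. Copy-module semantics are roughly write-to-temp-then-rename; a sketch of that, with the file's conventional InitiatorName= format assumed:

    import os
    import tempfile

    def install_initiatorname(iqn: str, dest="/etc/iscsi/initiatorname.iscsi"):
        # Write to a temp file in the same directory, then rename atomically,
        # approximating how the copy module avoids partial writes.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        with os.fdopen(fd, "w") as f:
            f.write(f"InitiatorName={iqn}\n")
        os.chmod(tmp, 0o644)
        os.replace(tmp, dest)

    install_initiatorname("iqn.1994-05.com.redhat:d7977a3a13b0")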
Oct 10 10:02:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:04 compute-0 sudo[226159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsndjpffpdjurovcchncptkjulokrrzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090524.2132993-362-3300062394747/AnsiballZ_file.py'
Oct 10 10:02:04 compute-0 sudo[226159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:04 compute-0 python3.9[226161]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:04 compute-0 sudo[226159]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:05 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef0001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:05 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:05 compute-0 python3.9[226312]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:05 compute-0 ceph-mon[73551]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:05 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:06.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:06 compute-0 sudo[226465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcnxfryxomiujqydroxwjkffthfyyzvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090525.881292-413-130181516890658/AnsiballZ_lineinfile.py'
Oct 10 10:02:06 compute-0 sudo[226465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:06 compute-0 python3.9[226467]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:06 compute-0 sudo[226465]: pam_unix(sudo:session): session closed for user root
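The lineinfile task just closed ensures a node.session.auth.chap_algs entry exists in /etc/iscsi/iscsid.conf, enabling SHA3-256/SHA256/SHA1/MD5 CHAP digest negotiation: replace any line matching the regexp, otherwise insert after the commented default named by insertafter. A simplified, idempotent sketch of that behavior (the real module handles many more options):

    import re

    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    def line_in_file(path="/etc/iscsi/iscsid.conf"):
        with open(path) as f:
            lines = f.read().splitlines()
        pat = re.compile(r"^node\.session\.auth\.chap_algs")
        anchor = re.compile(r"^#node\.session\.auth\.chap\.algs")
        for i, l in enumerate(lines):
            if pat.match(l):
                lines[i] = LINE  # regexp matched: replace in place
                break
        else:
            # No match: insert after the last anchor line, else append at EOF.
            idx = max((i for i, l in enumerate(lines) if anchor.match(l)), default=None)
            if idx is not None:
                lines.insert(idx + 1, LINE)
            else:
                lines.append(LINE)
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")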
Oct 10 10:02:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:07.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:02:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:07 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:07 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef0001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:07 compute-0 sudo[226618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqohblfxproxbdlkmgdsojndrgqxhzzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090526.9878745-440-260214207590656/AnsiballZ_file.py'
Oct 10 10:02:07 compute-0 sudo[226618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:07] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:02:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:07] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:02:07 compute-0 python3.9[226620]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:07 compute-0 sudo[226618]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:07 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:07 compute-0 ceph-mon[73551]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:08 compute-0 sudo[226771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogjuayxewqhslxdejbdygvcvjriqyqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090527.792203-464-86392790288310/AnsiballZ_stat.py'
Oct 10 10:02:08 compute-0 sudo[226771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:08.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:08 compute-0 python3.9[226773]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:08 compute-0 sudo[226771]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:08 compute-0 sudo[226861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtvxymiozzvtiyagleoqpyalwzedpumo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090527.792203-464-86392790288310/AnsiballZ_file.py'
Oct 10 10:02:08 compute-0 sudo[226861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:08 compute-0 podman[226823]: 2025-10-10 10:02:08.726981161 +0000 UTC m=+0.130673304 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:02:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:08 compute-0 python3.9[226869]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:08 compute-0 sudo[226861]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:09 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:09 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
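These recurring ganesha.nfsd TIRPC events line up with the same two-second probe cadence: ntirpc's svc_vc_recv is trying to read a HAProxy PROXY-protocol preamble on each new connection, and a liveness probe that merely connects and disconnects never sends one, so the parse fails and the transport is marked dead (the stray "%" is the daemon's own log-formatting artifact, preserved verbatim here). For reference, a well-formed PROXY v1 preamble, which these probes omit, looks like this (a sketch; addresses and ports are illustrative):

```python
def proxy_v1_header(src: str, dst: str, sport: int, dport: int) -> bytes:
    """Build a HAProxy PROXY protocol v1 preamble (text form, CRLF-terminated)."""
    return f"PROXY TCP4 {src} {dst} {sport} {dport}\r\n".encode()

print(proxy_v1_header("192.168.122.100", "192.168.122.100", 51234, 2049))
# b'PROXY TCP4 192.168.122.100 192.168.122.100 51234 2049\r\n'
```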
Oct 10 10:02:09 compute-0 sudo[227027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhyqfwhprjskfkpispvlrdvjtsgroore ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090529.053626-464-32475087882477/AnsiballZ_stat.py'
Oct 10 10:02:09 compute-0 sudo[227027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:09 compute-0 python3.9[227029]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:09 compute-0 sudo[227027]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:09 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef0001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:09 compute-0 ceph-mon[73551]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:09 compute-0 sudo[227105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubfnjqcytsyojfkzmwodrlcnjvdrtqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090529.053626-464-32475087882477/AnsiballZ_file.py'
Oct 10 10:02:09 compute-0 sudo[227105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:10 compute-0 python3.9[227107]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:10 compute-0 sudo[227105]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:10.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:10 compute-0 sudo[227259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgcdslaodujumkdborpryvwjxueahnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090530.5024087-533-138274389352771/AnsiballZ_file.py'
Oct 10 10:02:10 compute-0 sudo[227259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:11 compute-0 python3.9[227261]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
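Note the mode=420 in this invocation: Ansible logs file modes as plain decimal integers, and 420 is simply octal 0644, the usual quoting gotcha rather than a mis-set permission:

```python
# 420 decimal == 0644 octal, i.e. rw-r--r--
print(oct(420))       # 0o644
print(0o644 == 420)   # True
```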
Oct 10 10:02:11 compute-0 sudo[227259]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:11 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:11 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:11 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 10:02:11 compute-0 sudo[227411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnzlukqmqchtilmrlgppmoobsaotcqpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090531.2683694-557-220965741746676/AnsiballZ_stat.py'
Oct 10 10:02:11 compute-0 sudo[227411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:11 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:11 compute-0 ceph-mon[73551]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:11 compute-0 python3.9[227413]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:11 compute-0 sudo[227411]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:12 compute-0 sudo[227490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqwuiezyjkcxtajsyxzfirggyiyacft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090531.2683694-557-220965741746676/AnsiballZ_file.py'
Oct 10 10:02:12 compute-0 sudo[227490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:12 compute-0 python3.9[227492]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:12 compute-0 sudo[227490]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:02:12 compute-0 sudo[227643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izirnhjsidfupuppduzllneemdxuwarp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090532.693537-593-252701522935095/AnsiballZ_stat.py'
Oct 10 10:02:12 compute-0 sudo[227643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:13 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef0001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:13 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:13 compute-0 python3.9[227645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:13 compute-0 sudo[227643]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:13 compute-0 sudo[227721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epvttzterywnjekagsuvrloiwifdklih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090532.693537-593-252701522935095/AnsiballZ_file.py'
Oct 10 10:02:13 compute-0 sudo[227721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:13 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:13 compute-0 ceph-mon[73551]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:02:13 compute-0 python3.9[227723]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:13 compute-0 sudo[227721]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
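The mon's _set_new_cache_sizes figures are raw byte counts; converted, the monitor is working with roughly a 1 GiB cache budget split between incremental/full osdmap allocations and the key-value store:

```python
for name, n in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
    print(f"{name}: {n / 2**20:.0f} MiB")
# cache_size: 973 MiB, inc_alloc: 332 MiB, full_alloc: 332 MiB, kv_alloc: 304 MiB
```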
Oct 10 10:02:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:14.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:14.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:14 compute-0 sudo[227887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eafpbqsphhisadlezqqkchtnwwgdxfyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090534.067011-629-34500079060462/AnsiballZ_systemd.py'
Oct 10 10:02:14 compute-0 sudo[227887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:14 compute-0 sudo[227863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:02:14 compute-0 sudo[227863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:14 compute-0 sudo[227863]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:14 compute-0 python3.9[227899]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
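In systemd terms, this ansible-ansible.builtin.systemd call (daemon_reload=True, enabled=True, state=started) amounts to roughly the following, which explains the "Reloading." and generator messages on the next lines (a sketch; run as root):

```python
import subprocess

# Equivalent of daemon_reload=True, then enabled=True + state=started.
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "--now", "edpm-container-shutdown"], check=True)
```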
Oct 10 10:02:14 compute-0 systemd[1]: Reloading.
Oct 10 10:02:14 compute-0 systemd-sysv-generator[227951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:14 compute-0 podman[227904]: 2025-10-10 10:02:14.954866697 +0000 UTC m=+0.089417786 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:02:14 compute-0 systemd-rc-local-generator[227948]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef00095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:15 compute-0 sudo[227887]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:15 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:15 compute-0 ceph-mon[73551]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:15 compute-0 sudo[228107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bevokcwonidiavjbxhkdmccpwimrxrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090535.458782-653-245279839861678/AnsiballZ_stat.py'
Oct 10 10:02:15 compute-0 sudo[228107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:15 compute-0 python3.9[228109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:16 compute-0 sudo[228107]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 10:02:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:16.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 10:02:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100216 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:02:16
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.nfs', '.mgr', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.meta']
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
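"prepared 0/10 upmap changes" means the balancer evaluated its per-run budget of ten candidate pg-upmap changes and found the placement already even, so this pass is a no-op. The same state can be read back from the CLI (a sketch; assumes an admin keyring on this host):

```python
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    capture_output=True, text=True, check=True).stdout)
print(status["active"], status["mode"])  # expect: True upmap
```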
Oct 10 10:02:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:16 compute-0 sudo[228186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzucftfzubtydxdidbkhicxktohxdcrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090535.458782-653-245279839861678/AnsiballZ_file.py'
Oct 10 10:02:16 compute-0 sudo[228186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:02:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:02:16 compute-0 python3.9[228188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:16 compute-0 sudo[228186]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
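The pg_autoscaler arithmetic in this block is reproducible from the logged numbers: pg target is approximately usage fraction × bias × PG budget, and the values here are consistent with a budget of 300 (mon_target_pg_per_osd=100 times 3 OSDs, an assumption that fits this 60 GiB cluster):

```python
# Reproducing the autoscaler's "pg target" values from the lines above.
# PG_BUDGET = mon_target_pg_per_osd (100, the default) * 3 OSDs -- assumed.
PG_BUDGET = 100 * 3
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    ".nfs":               (6.359070782053786e-08, 1.0),
}
for name, (usage, bias) in pools.items():
    print(name, usage * bias * PG_BUDGET)
# .mgr               -> 0.0021557249951162337
# cephfs.cephfs.meta -> 0.0006104707950771635
# .nfs               -> 1.907721234616136e-05
# matching the logged pg targets exactly
```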
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:02:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:17.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
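Here the alertmanager dispatcher fails to deliver to the dashboard webhook receivers on compute-1 and compute-2 (connection timeouts, retries exhausted after two attempts); only this node's receiver is reachable. For debugging delivery, a stand-in receiver that simply acknowledges the POST can be pointed at the same path (a sketch, not the dashboard's real handler; port taken from the URL in the log line):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print("alert payload:", self.rfile.read(length)[:200])
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```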
Oct 10 10:02:17 compute-0 sudo[228339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfjyluysvtrzikrvnpslecyecrcidjkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090536.8388038-689-44978458056346/AnsiballZ_stat.py'
Oct 10 10:02:17 compute-0 sudo[228339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:17 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:17 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:17 compute-0 python3.9[228341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:17] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:02:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:17] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
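The mgr prometheus module serves these metrics as plain text, so the scrape logged here can be reproduced by hand (host from the log; port assumed to be the mgr prometheus default, 9283):

```python
import urllib.request

with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as r:
    for line in r.read().decode().splitlines()[:5]:
        print(line)
```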
Oct 10 10:02:17 compute-0 sudo[228339]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:17 compute-0 sudo[228417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-romrnnegiyfggrlamytpkemsjwwtkkgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090536.8388038-689-44978458056346/AnsiballZ_file.py'
Oct 10 10:02:17 compute-0 sudo[228417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:17 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef00095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:17 compute-0 ceph-mon[73551]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:17 compute-0 python3.9[228419]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:17 compute-0 sudo[228417]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:18.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:18 compute-0 sudo[228570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowzhwqozqteeamyzcirbyhntihqtnuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090538.1623037-725-76069065763258/AnsiballZ_systemd.py'
Oct 10 10:02:18 compute-0 sudo[228570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:18 compute-0 python3.9[228572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:02:18 compute-0 systemd[1]: Reloading.
Oct 10 10:02:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:18 compute-0 systemd-rc-local-generator[228599]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:18 compute-0 systemd-sysv-generator[228604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:19 compute-0 sudo[228608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:02:19 compute-0 sudo[228608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:19 compute-0 sudo[228608]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:19 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:19 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 10:02:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:19 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:19 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 10:02:19 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 10:02:19 compute-0 systemd[1]: Finished Create netns directory.
Oct 10 10:02:19 compute-0 sudo[228635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:02:19 compute-0 sudo[228635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:19 compute-0 sudo[228570]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:19 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:19 compute-0 ceph-mon[73551]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:19 compute-0 sudo[228635]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:19 compute-0 sudo[228845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzddqnlxcmdfxcflartdgpiyeieuuqdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090539.6583352-755-126064642586024/AnsiballZ_file.py'
Oct 10 10:02:19 compute-0 sudo[228845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:02:20 compute-0 sudo[228849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:02:20 compute-0 sudo[228849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:20 compute-0 sudo[228849]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:20.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:20 compute-0 python3.9[228847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:20 compute-0 sudo[228845]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:20 compute-0 sudo[228874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:02:20 compute-0 sudo[228874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.641467068 +0000 UTC m=+0.049931855 container create e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:02:20 compute-0 systemd[1]: Started libpod-conmon-e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a.scope.
Oct 10 10:02:20 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.618511995 +0000 UTC m=+0.026976812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.722405013 +0000 UTC m=+0.130869830 container init e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.730432549 +0000 UTC m=+0.138897336 container start e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.734348024 +0000 UTC m=+0.142812831 container attach e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 10:02:20 compute-0 relaxed_hodgkin[229063]: 167 167
Oct 10 10:02:20 compute-0 systemd[1]: libpod-e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a.scope: Deactivated successfully.
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.737226856 +0000 UTC m=+0.145691653 container died e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 10:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-402071d377cf06f5c6966fdd7f43edf5f797ba0e3fccecc57c485b9cb8ad7f29-merged.mount: Deactivated successfully.
Oct 10 10:02:20 compute-0 podman[229015]: 2025-10-10 10:02:20.782872564 +0000 UTC m=+0.191337351 container remove e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:02:20 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:02:20 compute-0 systemd[1]: libpod-conmon-e415ec230b424b67cdd7d67acbe2583a35a220b7969a7ee336c1cec3b919f37a.scope: Deactivated successfully.
Oct 10 10:02:20 compute-0 sudo[229121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqjopzqhxlyeyjofifhknkmdtedpglwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090540.4849665-779-67416427188024/AnsiballZ_stat.py'
Oct 10 10:02:20 compute-0 sudo[229121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:20 compute-0 podman[229133]: 2025-10-10 10:02:20.961988274 +0000 UTC m=+0.048210100 container create 4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:02:21 compute-0 systemd[1]: Started libpod-conmon-4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028.scope.
Oct 10 10:02:21 compute-0 python3.9[229127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:21 compute-0 podman[229133]: 2025-10-10 10:02:20.940604871 +0000 UTC m=+0.026826687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:21 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f568bec64a73f0a6343ca898dd62c8ff8d75fd8c17869a40c959dbff36134a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f568bec64a73f0a6343ca898dd62c8ff8d75fd8c17869a40c959dbff36134a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f568bec64a73f0a6343ca898dd62c8ff8d75fd8c17869a40c959dbff36134a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f568bec64a73f0a6343ca898dd62c8ff8d75fd8c17869a40c959dbff36134a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f568bec64a73f0a6343ca898dd62c8ff8d75fd8c17869a40c959dbff36134a0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
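These kernel notices are informational: without the XFS "bigtime" feature, inode timestamps are 32-bit signed epoch seconds, so the filesystem can only represent times up to the 0x7fffffff ceiling the kernel mentions:

```python
from datetime import datetime, timezone

# 0x7fffffff is the 32-bit signed epoch limit the kernel is warning about.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```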
Oct 10 10:02:21 compute-0 sudo[229121]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:21 compute-0 podman[229133]: 2025-10-10 10:02:21.074524687 +0000 UTC m=+0.160746533 container init 4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:02:21 compute-0 podman[229133]: 2025-10-10 10:02:21.08398357 +0000 UTC m=+0.170205366 container start 4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 10:02:21 compute-0 podman[229133]: 2025-10-10 10:02:21.087518563 +0000 UTC m=+0.173740379 container attach 4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:02:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef00095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:21 compute-0 sudo[229283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtesmpmrauyoxidrrkfexqmwejqyekhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090540.4849665-779-67416427188024/AnsiballZ_copy.py'
Oct 10 10:02:21 compute-0 sudo[229283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:21 compute-0 brave_elgamal[229150]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:02:21 compute-0 brave_elgamal[229150]: --> All data devices are unavailable
Oct 10 10:02:21 compute-0 systemd[1]: libpod-4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028.scope: Deactivated successfully.
Oct 10 10:02:21 compute-0 podman[229133]: 2025-10-10 10:02:21.444084029 +0000 UTC m=+0.530305815 container died 4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 10 10:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f568bec64a73f0a6343ca898dd62c8ff8d75fd8c17869a40c959dbff36134a0c-merged.mount: Deactivated successfully.
Oct 10 10:02:21 compute-0 podman[229133]: 2025-10-10 10:02:21.492678821 +0000 UTC m=+0.578900617 container remove 4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:02:21 compute-0 systemd[1]: libpod-conmon-4eb1874aca5accf0ed6a256c989a62f1fc881e24e1b62057b40379c957ada028.scope: Deactivated successfully.
Oct 10 10:02:21 compute-0 sudo[228874]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:21 compute-0 sudo[229298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:02:21 compute-0 python3.9[229287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090540.4849665-779-67416427188024/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:21 compute-0 sudo[229298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:21 compute-0 sudo[229298]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:21 compute-0 sudo[229283]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:21 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:21 compute-0 sudo[229323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:02:21 compute-0 sudo[229323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:22 compute-0 ceph-mon[73551]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.15492852 +0000 UTC m=+0.054274994 container create 9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:02:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:22.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
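The three beast lines above are one health probe: an anonymous "HEAD / HTTP/1.0" answered 200 in about a millisecond, and the same pair of clients (192.168.122.100 and .102) repeats it every two seconds throughout this section, which is the classic load-balancer health-check pattern. A minimal sketch of such a probe in Python; the port 8080 is an assumption, since the access log does not record the listening socket:

    import http.client

    # Probe the RGW frontend the way the beast access lines above record:
    # an anonymous HEAD / that should return 200 almost instantly.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the log shows 200 with latency around 0-1 ms
    conn.close()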
Oct 10 10:02:22 compute-0 systemd[1]: Started libpod-conmon-9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385.scope.
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.127428102 +0000 UTC m=+0.026774626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.245390389 +0000 UTC m=+0.144736913 container init 9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.256799293 +0000 UTC m=+0.156145777 container start 9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.260829382 +0000 UTC m=+0.160175896 container attach 9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:02:22 compute-0 youthful_sanderson[229430]: 167 167
Oct 10 10:02:22 compute-0 systemd[1]: libpod-9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385.scope: Deactivated successfully.
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.266235275 +0000 UTC m=+0.165581769 container died 9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 10 10:02:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c112f34aa5a7f33f219fd8d59cf0b47045800d86fc8e66958c483b6068e79caa-merged.mount: Deactivated successfully.
Oct 10 10:02:22 compute-0 podman[229414]: 2025-10-10 10:02:22.317163241 +0000 UTC m=+0.216509715 container remove 9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:02:22 compute-0 systemd[1]: libpod-conmon-9ccd5adf3603a0389d59652a4881ae4c9bb0a5eeabb6552ebeafac17b6dfb385.scope: Deactivated successfully.
Oct 10 10:02:22 compute-0 podman[229477]: 2025-10-10 10:02:22.511505678 +0000 UTC m=+0.054359957 container create 38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_driscoll, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:02:22 compute-0 systemd[1]: Started libpod-conmon-38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52.scope.
Oct 10 10:02:22 compute-0 podman[229477]: 2025-10-10 10:02:22.482954175 +0000 UTC m=+0.025808465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a7ab7937f3dd9bf5c1907a0ccfcc4be59b98d4d256af577dda1aad2b6f5922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a7ab7937f3dd9bf5c1907a0ccfcc4be59b98d4d256af577dda1aad2b6f5922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a7ab7937f3dd9bf5c1907a0ccfcc4be59b98d4d256af577dda1aad2b6f5922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a7ab7937f3dd9bf5c1907a0ccfcc4be59b98d4d256af577dda1aad2b6f5922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
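The four xfs warnings above fire once per bind mount of the same overlay (rootfs, ceph.conf, /var/log/ceph, /var/lib/ceph/crash) and are informational: 0x7fffffff is the largest 32-bit signed time_t, so an xfs filesystem without the bigtime feature cannot represent timestamps past that instant. A one-line check of the cutoff they quote:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the 32-bit time_t limit
    # quoted by the kernel lines above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00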
Oct 10 10:02:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:22 compute-0 podman[229477]: 2025-10-10 10:02:22.606513962 +0000 UTC m=+0.149368221 container init 38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_driscoll, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Oct 10 10:02:22 compute-0 podman[229477]: 2025-10-10 10:02:22.613002589 +0000 UTC m=+0.155856828 container start 38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_driscoll, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:02:22 compute-0 podman[229477]: 2025-10-10 10:02:22.617565775 +0000 UTC m=+0.160420014 container attach 38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 10:02:22 compute-0 sudo[229602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxyyoskauehmiothifaqadkguwozwhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090542.323388-830-118187797498174/AnsiballZ_file.py'
Oct 10 10:02:22 compute-0 sudo[229602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]: {
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:     "0": [
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:         {
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "devices": [
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "/dev/loop3"
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             ],
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "lv_name": "ceph_lv0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "lv_size": "21470642176",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "name": "ceph_lv0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "tags": {
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.cluster_name": "ceph",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.crush_device_class": "",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.encrypted": "0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.osd_id": "0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.type": "block",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.vdo": "0",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:                 "ceph.with_tpm": "0"
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             },
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "type": "block",
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:             "vg_name": "ceph_vg0"
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:         }
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]:     ]
Oct 10 10:02:22 compute-0 naughty_driscoll[229516]: }
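The JSON emitted line-by-line above under the naughty_driscoll name is the output of the `ceph-volume ... lvm list --format json` run launched by the cephadm command a few lines earlier: one OSD (id 0) backed by LV ceph_lv0 in VG ceph_vg0 on /dev/loop3. (The `raw list` run a moment later returns an empty object.) A minimal sketch of consuming such a report, assuming the fragments have been reassembled into a file named lvm_list.json — the file name is ours, not cephadm's:

    import json

    # lvm_list.json: the reassembled `ceph-volume lvm list --format json`
    # output captured in the log above.
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Top-level keys are OSD ids; each value is a list of the LVs
    # backing that OSD, with ceph metadata under "tags".
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])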
Oct 10 10:02:22 compute-0 systemd[1]: libpod-38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52.scope: Deactivated successfully.
Oct 10 10:02:22 compute-0 podman[229477]: 2025-10-10 10:02:22.943757382 +0000 UTC m=+0.486611621 container died 38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a7ab7937f3dd9bf5c1907a0ccfcc4be59b98d4d256af577dda1aad2b6f5922-merged.mount: Deactivated successfully.
Oct 10 10:02:23 compute-0 podman[229477]: 2025-10-10 10:02:23.003070296 +0000 UTC m=+0.545924535 container remove 38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_driscoll, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:02:23 compute-0 systemd[1]: libpod-conmon-38e8583eecf464c811411bdc1b5242ac9acea07a4764929fcb2806dd4ae6ce52.scope: Deactivated successfully.
Oct 10 10:02:23 compute-0 python3.9[229605]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:23 compute-0 sudo[229323]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:23 compute-0 sudo[229602]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:23 compute-0 sudo[229620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:02:23 compute-0 sudo[229620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:23 compute-0 sudo[229620]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:23 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:23 compute-0 sudo[229669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:02:23 compute-0 sudo[229669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:23 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:23 compute-0 sudo[229861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrslrsspkuhtieiccsoixyogergbdymw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090543.2870433-854-202770337315539/AnsiballZ_stat.py'
Oct 10 10:02:23 compute-0 sudo[229861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.653490386 +0000 UTC m=+0.051466814 container create 4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:02:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:23 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:23 compute-0 systemd[1]: Started libpod-conmon-4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb.scope.
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.632873588 +0000 UTC m=+0.030850026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:23 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.751547268 +0000 UTC m=+0.149523736 container init 4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.759721169 +0000 UTC m=+0.157697627 container start 4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.764353917 +0000 UTC m=+0.162330385 container attach 4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:02:23 compute-0 suspicious_carson[229879]: 167 167
Oct 10 10:02:23 compute-0 systemd[1]: libpod-4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb.scope: Deactivated successfully.
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.76822903 +0000 UTC m=+0.166205448 container died 4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-daca319f7f9854ad1f20df5892999fbc46553ab4f836c0f89a9fe9ff769f98bb-merged.mount: Deactivated successfully.
Oct 10 10:02:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:23 compute-0 podman[229857]: 2025-10-10 10:02:23.805868243 +0000 UTC m=+0.203844661 container remove 4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 10:02:23 compute-0 systemd[1]: libpod-conmon-4bb7e8099e134551a073dde4563aff4adf45a2e37d7652305ccfa864793412cb.scope: Deactivated successfully.
Oct 10 10:02:23 compute-0 python3.9[229874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:23 compute-0 sudo[229861]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:23 compute-0 podman[229902]: 2025-10-10 10:02:23.985265132 +0000 UTC m=+0.048518111 container create 15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:02:24 compute-0 systemd[1]: Started libpod-conmon-15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc.scope.
Oct 10 10:02:24 compute-0 podman[229902]: 2025-10-10 10:02:23.964680084 +0000 UTC m=+0.027933083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b5f082e68074522dc061c01c35ac79fad250c597028d0a998cfd97ffb7a910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b5f082e68074522dc061c01c35ac79fad250c597028d0a998cfd97ffb7a910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b5f082e68074522dc061c01c35ac79fad250c597028d0a998cfd97ffb7a910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b5f082e68074522dc061c01c35ac79fad250c597028d0a998cfd97ffb7a910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:24 compute-0 podman[229902]: 2025-10-10 10:02:24.090880084 +0000 UTC m=+0.154133093 container init 15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:02:24 compute-0 podman[229902]: 2025-10-10 10:02:24.097185936 +0000 UTC m=+0.160438915 container start 15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:02:24 compute-0 podman[229902]: 2025-10-10 10:02:24.101196614 +0000 UTC m=+0.164449593 container attach 15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:02:24 compute-0 ceph-mon[73551]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:24.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:24 compute-0 sudo[230044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blihllhpgzqaryxrdscabjiydwmbtmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090543.2870433-854-202770337315539/AnsiballZ_copy.py'
Oct 10 10:02:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:24.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:24 compute-0 sudo[230044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:24 compute-0 python3.9[230046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090543.2870433-854-202770337315539/.source.json _original_basename=.lw13o3ca follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:24 compute-0 sudo[230044]: pam_unix(sudo:session): session closed for user root
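The stat/copy pair above is ansible verifying and then writing /var/lib/kolla/config_files/iscsid.json by content digest (checksum_algorithm=sha1, checksum=80e4f97...). A sketch of the same digest computation; the helper name file_sha1 is ours, not ansible's:

    import hashlib

    def file_sha1(path: str) -> str:
        # Stream the file in chunks and return the hex sha1 digest that
        # ansible's copy/stat modules report as "checksum".
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(file_sha1("/var/lib/kolla/config_files/iscsid.json"))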
Oct 10 10:02:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:24 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:02:24 compute-0 lvm[230168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:02:24 compute-0 lvm[230168]: VG ceph_vg0 finished
Oct 10 10:02:24 compute-0 gallant_buck[229959]: {}
Oct 10 10:02:24 compute-0 systemd[1]: libpod-15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc.scope: Deactivated successfully.
Oct 10 10:02:24 compute-0 podman[229902]: 2025-10-10 10:02:24.911000655 +0000 UTC m=+0.974253644 container died 15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 10:02:24 compute-0 systemd[1]: libpod-15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc.scope: Consumed 1.263s CPU time.
Oct 10 10:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b5f082e68074522dc061c01c35ac79fad250c597028d0a998cfd97ffb7a910-merged.mount: Deactivated successfully.
Oct 10 10:02:24 compute-0 podman[229902]: 2025-10-10 10:02:24.959124742 +0000 UTC m=+1.022377731 container remove 15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 10:02:24 compute-0 systemd[1]: libpod-conmon-15d343855f04355fba254747f5aeb66d0cfadcad8de18fc116830554845eb8cc.scope: Deactivated successfully.
Oct 10 10:02:25 compute-0 sudo[229669]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:02:25 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:02:25 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
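The two handle_command/audit pairs above are the cephadm mgr module persisting its per-host device inventory in the mon config-key store; the value payload is not included in the log. Sketched as the equivalent CLI call via subprocess, with the value left as an explicit placeholder:

    import subprocess

    # `ceph config-key set <key> <value>` is the CLI form of the
    # mon_command above; the real value (a JSON device inventory) is
    # elided in the log, so a placeholder stands in here.
    subprocess.run(
        ["ceph", "config-key", "set",
         "mgr/cephadm/host.compute-0.devices.0", "<value>"],
        check=True,
    )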
Oct 10 10:02:25 compute-0 sudo[230285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyupkyaqhcowbkwvohtbxcnytoyrpasq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090544.7827036-899-196001969972161/AnsiballZ_file.py'
Oct 10 10:02:25 compute-0 sudo[230285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:25 compute-0 sudo[230279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:02:25 compute-0 sudo[230279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:25 compute-0 sudo[230279]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:25 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:25 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:25 compute-0 python3.9[230306]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:25 compute-0 sudo[230285]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:25 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:26 compute-0 ceph-mon[73551]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:26 compute-0 sudo[230459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veeebtlfecyspcdjricmwonruzzhqqgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090545.7747447-923-203560988104436/AnsiballZ_stat.py'
Oct 10 10:02:26 compute-0 sudo[230459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:26.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:26 compute-0 sudo[230459]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:26 compute-0 sudo[230582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aynjiyywqkireyinqlscuprifcxqqdnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090545.7747447-923-203560988104436/AnsiballZ_copy.py'
Oct 10 10:02:26 compute-0 sudo[230582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:26 compute-0 sudo[230582]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:27.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:27.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:27.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
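The three alertmanager lines above show both ceph-dashboard webhook receivers (compute-1 and compute-2) timing out on the POST to /api/prometheus_receiver, after which the dispatcher cancels the retry. A hedged sketch of the request being retried; the payload shape is illustrative, not taken from the log:

    import json
    import urllib.request

    payload = json.dumps({"alerts": []}).encode()
    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        # The log shows this dial timing out against 192.168.122.101:8443.
        urllib.request.urlopen(req, timeout=2)
    except OSError as err:
        print("notify attempt failed, will retry later:", err)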
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:27] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:02:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:27] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:02:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:27 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:02:27 compute-0 sudo[230735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtxyhcsujidndvgvprovetcuntmorttk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090547.3311357-974-42302492114332/AnsiballZ_container_config_data.py'
Oct 10 10:02:27 compute-0 sudo[230735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:28 compute-0 python3.9[230737]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 10 10:02:28 compute-0 sudo[230735]: pam_unix(sudo:session): session closed for user root
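ansible-container_config_data above gathers the container startup config for iscsid by matching config_pattern under config_path. A sketch under the assumption that the step amounts to reading every matching JSON file into a dict keyed by file name:

    import glob
    import json
    import os

    # Path and pattern taken from the module invocation in the log.
    config_path = "/var/lib/edpm-config/container-startup-config/iscsid"
    configs = {}
    for path in sorted(glob.glob(os.path.join(config_path, "*.json"))):
        with open(path) as f:
            configs[os.path.basename(path)] = json.load(f)
    print(list(configs))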
Oct 10 10:02:28 compute-0 ceph-mon[73551]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:28.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 10 10:02:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:28 compute-0 sudo[230889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rstutuyoivupxhxzhnbvzjzufqhxnrek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090548.3918304-1001-61986081721553/AnsiballZ_container_config_hash.py'
Oct 10 10:02:28 compute-0 sudo[230889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:29 compute-0 python3.9[230891]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:02:29 compute-0 sudo[230889]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:29 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:29 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:29 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:30 compute-0 sudo[231041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukjajroycgotefxfuuwthjoilxaadzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090549.5887933-1028-223230603092183/AnsiballZ_podman_container_info.py'
Oct 10 10:02:30 compute-0 sudo[231041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:30 compute-0 ceph-mon[73551]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 10 10:02:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:30.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:30 compute-0 python3.9[231043]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 10:02:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:30 compute-0 sudo[231041]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 10 10:02:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:30 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:02:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:31 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee0004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:31 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:02:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:31 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:32 compute-0 ceph-mon[73551]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 10 10:02:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:32 compute-0 sudo[231221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjjcofbkfmjstlspwbievxvdivwzpqrp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090551.5401042-1067-132955697960811/AnsiballZ_edpm_container_manage.py'
Oct 10 10:02:32 compute-0 sudo[231221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:32.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:32 compute-0 python3[231223]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:02:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:02:32 compute-0 podman[231259]: 2025-10-10 10:02:32.635202816 +0000 UTC m=+0.103665542 container create 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:02:32 compute-0 podman[231259]: 2025-10-10 10:02:32.55423134 +0000 UTC m=+0.022694046 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:32 compute-0 python3[231223]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:32 compute-0 sudo[231221]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:33 compute-0 ceph-mon[73551]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:02:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:33 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:33 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:33 compute-0 sudo[231449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nziiwipanxxzkmpiqflijlonkknqxyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090553.1641648-1091-26190617665813/AnsiballZ_stat.py'
Oct 10 10:02:33 compute-0 sudo[231449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:33 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:33 compute-0 python3.9[231451]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:33 compute-0 sudo[231449]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:34.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:34 compute-0 sudo[231604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buexlusxmelsrmytrmclahyzipvaxaao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090554.1427288-1118-241479030371633/AnsiballZ_file.py'
Oct 10 10:02:34 compute-0 sudo[231604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:34 compute-0 sudo[231607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:02:34 compute-0 sudo[231607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:34 compute-0 sudo[231607]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:34 compute-0 python3.9[231606]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:34 compute-0 sudo[231604]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:34 compute-0 sudo[231706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bivanoucoccdubfgxmdlpokotmbboqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090554.1427288-1118-241479030371633/AnsiballZ_stat.py'
Oct 10 10:02:34 compute-0 sudo[231706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:35 compute-0 python3.9[231708]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:35 compute-0 sudo[231706]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:35 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:35 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:35 compute-0 sudo[231857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqeiwslskwoubvproofpyiebdwktxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090555.1951373-1118-14668871481313/AnsiballZ_copy.py'
Oct 10 10:02:35 compute-0 sudo[231857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:35 compute-0 ceph-mon[73551]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:35 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:35 compute-0 python3.9[231859]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090555.1951373-1118-14668871481313/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:35 compute-0 sudo[231857]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:36 compute-0 sudo[231934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyvltynpuqorlrbiilahqwaijrblqand ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090555.1951373-1118-14668871481313/AnsiballZ_systemd.py'
Oct 10 10:02:36 compute-0 sudo[231934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:36.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100236 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:02:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:36.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:36 compute-0 python3.9[231936]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:02:36 compute-0 systemd[1]: Reloading.
Oct 10 10:02:36 compute-0 systemd-sysv-generator[231966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:36 compute-0 systemd-rc-local-generator[231961]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:36 compute-0 sudo[231934]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:37.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:02:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:37.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:02:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:37 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:37 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:37 compute-0 sudo[232046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitpmnpounvcyivajwygidtyvqolibzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090555.1951373-1118-14668871481313/AnsiballZ_systemd.py'
Oct 10 10:02:37 compute-0 sudo[232046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:02:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:02:37 compute-0 python3.9[232048]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:02:37 compute-0 systemd[1]: Reloading.
Oct 10 10:02:37 compute-0 ceph-mon[73551]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:37 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:37 compute-0 systemd-sysv-generator[232081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:37 compute-0 systemd-rc-local-generator[232075]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:38 compute-0 systemd[1]: Starting iscsid container...
Oct 10 10:02:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bfcf22afa1d137f490c46d9ac316d7e90446125f72ed0d2e96ee3d92ae53fe/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bfcf22afa1d137f490c46d9ac316d7e90446125f72ed0d2e96ee3d92ae53fe/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bfcf22afa1d137f490c46d9ac316d7e90446125f72ed0d2e96ee3d92ae53fe/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:38.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1.
Oct 10 10:02:38 compute-0 podman[232088]: 2025-10-10 10:02:38.196874007 +0000 UTC m=+0.134853057 container init 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 10 10:02:38 compute-0 iscsid[232105]: + sudo -E kolla_set_configs
Oct 10 10:02:38 compute-0 podman[232088]: 2025-10-10 10:02:38.226151072 +0000 UTC m=+0.164130142 container start 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:02:38 compute-0 podman[232088]: iscsid
Oct 10 10:02:38 compute-0 systemd[1]: Started iscsid container.
Oct 10 10:02:38 compute-0 sudo[232112]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 10 10:02:38 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 10 10:02:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:38 compute-0 sudo[232046]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 10:02:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 10:02:38 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 10 10:02:38 compute-0 systemd[232135]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 10 10:02:38 compute-0 podman[232111]: 2025-10-10 10:02:38.348600192 +0000 UTC m=+0.107620227 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:02:38 compute-0 systemd[1]: 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1-29a14b31d972a1ae.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 10:02:38 compute-0 systemd[1]: 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1-29a14b31d972a1ae.service: Failed with result 'exit-code'.
Oct 10 10:02:38 compute-0 systemd[232135]: Queued start job for default target Main User Target.
Oct 10 10:02:38 compute-0 systemd[232135]: Created slice User Application Slice.
Oct 10 10:02:38 compute-0 systemd[232135]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 10:02:38 compute-0 systemd[232135]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 10:02:38 compute-0 systemd[232135]: Reached target Paths.
Oct 10 10:02:38 compute-0 systemd[232135]: Reached target Timers.
Oct 10 10:02:38 compute-0 systemd[232135]: Starting D-Bus User Message Bus Socket...
Oct 10 10:02:38 compute-0 systemd[232135]: Starting Create User's Volatile Files and Directories...
Oct 10 10:02:38 compute-0 systemd[232135]: Finished Create User's Volatile Files and Directories.
Oct 10 10:02:38 compute-0 systemd[232135]: Listening on D-Bus User Message Bus Socket.
Oct 10 10:02:38 compute-0 systemd[232135]: Reached target Sockets.
Oct 10 10:02:38 compute-0 systemd[232135]: Reached target Basic System.
Oct 10 10:02:38 compute-0 systemd[232135]: Reached target Main User Target.
Oct 10 10:02:38 compute-0 systemd[232135]: Startup finished in 174ms.
Oct 10 10:02:38 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 10 10:02:38 compute-0 systemd[1]: Started Session c3 of User root.
Oct 10 10:02:38 compute-0 sudo[232112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:02:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:02:38 compute-0 iscsid[232105]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:02:38 compute-0 iscsid[232105]: INFO:__main__:Validating config file
Oct 10 10:02:38 compute-0 iscsid[232105]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:02:38 compute-0 iscsid[232105]: INFO:__main__:Writing out command to execute
Oct 10 10:02:38 compute-0 sudo[232112]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:38 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 10 10:02:38 compute-0 iscsid[232105]: ++ cat /run_command
Oct 10 10:02:38 compute-0 iscsid[232105]: + CMD='/usr/sbin/iscsid -f'
Oct 10 10:02:38 compute-0 iscsid[232105]: + ARGS=
Oct 10 10:02:38 compute-0 iscsid[232105]: + sudo kolla_copy_cacerts
Oct 10 10:02:38 compute-0 sudo[232228]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 10 10:02:38 compute-0 systemd[1]: Started Session c4 of User root.
Oct 10 10:02:38 compute-0 sudo[232228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:02:38 compute-0 sudo[232228]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:38 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 10 10:02:38 compute-0 iscsid[232105]: + [[ ! -n '' ]]
Oct 10 10:02:38 compute-0 iscsid[232105]: + . kolla_extend_start
Oct 10 10:02:38 compute-0 iscsid[232105]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 10 10:02:38 compute-0 iscsid[232105]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 10 10:02:38 compute-0 iscsid[232105]: + umask 0022
Oct 10 10:02:38 compute-0 iscsid[232105]: + exec /usr/sbin/iscsid -f
Oct 10 10:02:38 compute-0 iscsid[232105]: Running command: '/usr/sbin/iscsid -f'
Oct 10 10:02:38 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 10 10:02:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:38 compute-0 podman[232279]: 2025-10-10 10:02:38.879116374 +0000 UTC m=+0.105962045 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:02:38 compute-0 python3.9[232321]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:39 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:39 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc0018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:39 compute-0 sudo[232484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebxejztmehgdrszhfljsiyoedimwzwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090559.3190258-1229-61838245738654/AnsiballZ_file.py'
Oct 10 10:02:39 compute-0 sudo[232484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:39 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:39 compute-0 ceph-mon[73551]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:02:39 compute-0 python3.9[232486]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:39 compute-0 sudo[232484]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:40.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:02:40 compute-0 sudo[232637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrnjiztpjedsevmvgrgrumntdjllwywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090560.4325523-1262-99050063233649/AnsiballZ_service_facts.py'
Oct 10 10:02:40 compute-0 sudo[232637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:40 compute-0 python3.9[232639]: ansible-ansible.builtin.service_facts Invoked
Oct 10 10:02:41 compute-0 network[232657]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 10:02:41 compute-0 network[232658]: 'network-scripts' will be removed from distribution in near future.
Oct 10 10:02:41 compute-0 network[232659]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 10:02:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:41 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:41 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:41 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc0018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:41 compute-0 ceph-mon[73551]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:02:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:02:41.884 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:02:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:02:41.886 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:02:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:02:41.887 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:02:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:42.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:02:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:43 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:43 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:43 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:43 compute-0 ceph-mon[73551]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:02:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:44.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:44 compute-0 sudo[232637]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc0018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc0018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:45 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:45 compute-0 ceph-mon[73551]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:46.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:46 compute-0 podman[232812]: 2025-10-10 10:02:46.22456183 +0000 UTC m=+0.070830903 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:02:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:46.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:02:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:02:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:46 compute-0 sudo[232957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxilrcbbiwzgkifimgsryahlxjpeoaxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090566.4202118-1292-9680851607917/AnsiballZ_file.py'
Oct 10 10:02:46 compute-0 sudo[232957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:46 compute-0 python3.9[232959]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 10:02:46 compute-0 sudo[232957]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:47.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:02:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:47 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:47 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:47] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:02:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:47] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:02:47 compute-0 sudo[233110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwzyimfiujgkoouqipshcillwfluojtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090567.1708145-1316-115062973008755/AnsiballZ_modprobe.py'
Oct 10 10:02:47 compute-0 sudo[233110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:47 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc001a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:47 compute-0 ceph-mon[73551]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:47 compute-0 python3.9[233112]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 10 10:02:47 compute-0 sudo[233110]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:48.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:48 compute-0 sudo[233267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhguzjwdgsolfifpmzontkjsxghivjco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090568.115384-1340-168139745179623/AnsiballZ_stat.py'
Oct 10 10:02:48 compute-0 sudo[233267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:48 compute-0 python3.9[233269]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:48 compute-0 sudo[233267]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:48 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 10 10:02:48 compute-0 systemd[232135]: Activating special unit Exit the Session...
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped target Main User Target.
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped target Basic System.
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped target Paths.
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped target Sockets.
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped target Timers.
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 10:02:48 compute-0 systemd[232135]: Closed D-Bus User Message Bus Socket.
Oct 10 10:02:48 compute-0 systemd[232135]: Stopped Create User's Volatile Files and Directories.
Oct 10 10:02:48 compute-0 systemd[232135]: Removed slice User Application Slice.
Oct 10 10:02:48 compute-0 systemd[232135]: Reached target Shutdown.
Oct 10 10:02:48 compute-0 systemd[232135]: Finished Exit the Session.
Oct 10 10:02:48 compute-0 systemd[232135]: Reached target Exit the Session.
Oct 10 10:02:48 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 10:02:48 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 10 10:02:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 10:02:48 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 10:02:48 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 10:02:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 10:02:48 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 10:02:48 compute-0 sudo[233393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icmzxetiybdnbpvttqhfykldahgfrhik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090568.115384-1340-168139745179623/AnsiballZ_copy.py'
Oct 10 10:02:48 compute-0 sudo[233393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:49 compute-0 python3.9[233395]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090568.115384-1340-168139745179623/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:49 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:49 compute-0 sudo[233393]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:49 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:49 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:49 compute-0 ceph-mon[73551]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:50 compute-0 sudo[233545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inohybentqxhmrhahlxdauetklqdsxeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090569.733105-1388-110111790337984/AnsiballZ_lineinfile.py'
Oct 10 10:02:50 compute-0 sudo[233545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:50 compute-0 python3.9[233547]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:50 compute-0 sudo[233545]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:50 compute-0 sudo[233699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlmcslgmghnokagxmhjvdzggolsplzwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090570.5607486-1412-20915379496448/AnsiballZ_systemd.py'
Oct 10 10:02:50 compute-0 sudo[233699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:51 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc001a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:51 compute-0 python3.9[233701]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:02:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:51 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:51 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 10:02:51 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 10 10:02:51 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 10 10:02:51 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 10 10:02:51 compute-0 systemd[1]: Finished Load Kernel Modules.
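The Stopped/Starting/Finished triplet above is the effect of the ansible.builtin.systemd task with state=restarted: systemd-modules-load.service re-reads every *.conf under the modules-load.d directories, including the /etc/modules-load.d/dm-multipath.conf installed just before, so the module now also loads on every boot. A sketch for listing what that restart picked up, with the directories taken from modules-load.d(5):

    from pathlib import Path

    def configured_modules():
        # Search order per modules-load.d(5); /etc takes precedence.
        for d in ("/etc/modules-load.d", "/run/modules-load.d",
                  "/usr/lib/modules-load.d"):
            base = Path(d)
            if not base.is_dir():
                continue
            for conf in sorted(base.glob("*.conf")):
                for raw in conf.read_text().splitlines():
                    line = raw.strip()
                    if line and not line.startswith(("#", ";")):
                        yield conf.name, line

    for src, mod in configured_modules():
        print(f"{src}: {mod}")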
Oct 10 10:02:51 compute-0 sudo[233699]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:51 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 10 10:02:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:51 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:51 compute-0 ceph-mon[73551]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:52 compute-0 sudo[233856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvscigbwljwlxqhwdisegbgeqqyepavt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090571.6813838-1436-210373209456279/AnsiballZ_file.py'
Oct 10 10:02:52 compute-0 sudo[233856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:52.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:52 compute-0 python3.9[233858]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:52 compute-0 sudo[233856]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:02:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:02:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:02:52 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 10 10:02:52 compute-0 sudo[234011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqknhmsuyqjpkcvjhmtccpipenofnvcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090572.5942423-1463-71646002079836/AnsiballZ_stat.py'
Oct 10 10:02:52 compute-0 sudo[234011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:53 compute-0 python3.9[234013]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:53 compute-0 sudo[234011]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:53 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:53 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:53 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:53 compute-0 sudo[234163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzzzxtagnzlpejbewhdrfqvoxllcepje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090573.5302136-1490-50150912639408/AnsiballZ_stat.py'
Oct 10 10:02:53 compute-0 sudo[234163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:53 compute-0 ceph-mon[73551]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:02:54 compute-0 python3.9[234165]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:54 compute-0 sudo[234163]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:54.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:54 compute-0 sudo[234316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vadnozjzlcfbmnuwbrnphubfkphvsxpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090574.2888558-1514-238161322220362/AnsiballZ_stat.py'
Oct 10 10:02:54 compute-0 sudo[234316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:54 compute-0 sudo[234319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:02:54 compute-0 sudo[234319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:54 compute-0 sudo[234319]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:54 compute-0 python3.9[234318]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:54 compute-0 sudo[234316]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:55 compute-0 sudo[234465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpmjbanxzzezyvgccjhexkvbqtgpmqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090574.2888558-1514-238161322220362/AnsiballZ_copy.py'
Oct 10 10:02:55 compute-0 sudo[234465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:55 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:55 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:55 compute-0 python3.9[234467]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090574.2888558-1514-238161322220362/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:55 compute-0 sudo[234465]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:55 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:55 compute-0 ceph-mon[73551]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:56 compute-0 sudo[234618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfichjrykpygyluxqctmwriqnkpuukvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090575.7757921-1559-44195942105142/AnsiballZ_command.py'
Oct 10 10:02:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:02:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:02:56 compute-0 sudo[234618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:56 compute-0 python3.9[234620]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:02:56 compute-0 sudo[234618]: pam_unix(sudo:session): session closed for user root
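The grep above is the check half of a check-then-edit idiom: its exit status tells the playbook whether /etc/multipath.conf already opens a blacklist section, and the lineinfile/replace tasks that follow only create one when it is missing. The equivalent test in Python (re.M makes ^ match at each line start, like grep does per line):

    import re

    def has_blacklist_block(text: str) -> bool:
        return re.search(r"^blacklist\s*{", text, re.M) is not None

    print(has_blacklist_block(open("/etc/multipath.conf").read()))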
Oct 10 10:02:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:56 compute-0 sudo[234772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efpuexsxuzzplyoufawmhqpgrzxnklpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090576.6471856-1583-194734368853944/AnsiballZ_lineinfile.py'
Oct 10 10:02:56 compute-0 sudo[234772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:02:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:02:57.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:02:57 compute-0 python3.9[234774]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:57 compute-0 sudo[234772]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:57 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:57 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:57] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:02:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:02:57] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:02:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:57 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:57 compute-0 ceph-mon[73551]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:57 compute-0 sudo[234924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acpjqfufsyoopvvuykxzgdflwqputuxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090577.44083-1607-267963287920105/AnsiballZ_replace.py'
Oct 10 10:02:57 compute-0 sudo[234924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:58 compute-0 python3.9[234926]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:58 compute-0 sudo[234924]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.003000097s ======
Oct 10 10:02:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000097s
Oct 10 10:02:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:02:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:02:58 compute-0 sudo[235077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgbfyvhdwcmstbpigoqdoiqkcljvblbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090578.3844707-1631-144554444480177/AnsiballZ_replace.py'
Oct 10 10:02:58 compute-0 sudo[235077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:58 compute-0 python3.9[235079]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:58 compute-0 sudo[235077]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:59 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:59 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc001a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:59 compute-0 sudo[235230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcvcfscgtebxfjvxygurrcwmyzyofkyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090579.2415504-1658-182129724343367/AnsiballZ_lineinfile.py'
Oct 10 10:02:59 compute-0 sudo[235230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:02:59 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:59 compute-0 python3.9[235232]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:59 compute-0 sudo[235230]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:59 compute-0 ceph-mon[73551]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:00 compute-0 sudo[235383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbqazwklzgenxziosvxzlfvildckftp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090579.93919-1658-190617180753087/AnsiballZ_lineinfile.py'
Oct 10 10:03:00 compute-0 sudo[235383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:03:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:00.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:03:00 compute-0 python3.9[235385]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:00 compute-0 sudo[235383]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:00 compute-0 sudo[235536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxunzxhyvqldoxgdlfnrbykvswwvwzfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090580.595174-1658-64276438840575/AnsiballZ_lineinfile.py'
Oct 10 10:03:00 compute-0 sudo[235536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:01 compute-0 python3.9[235538]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:01 compute-0 sudo[235536]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:03:01 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:03:01 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ee4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:03:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:01 compute-0 sudo[235688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwhtpzjcmhlfdthchyoamohgfludaxbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090581.2845008-1658-152956266310689/AnsiballZ_lineinfile.py'
Oct 10 10:03:01 compute-0 sudo[235688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:03:01 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ecc001a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:01 compute-0 python3.9[235690]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:01 compute-0 sudo[235688]: pam_unix(sudo:session): session closed for user root
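Taken together, the four lineinfile tasks between 10:02:59 and 10:03:01 converge the defaults section of /etc/multipath.conf on fixed values. Because each uses insertafter=^defaults with firstmatch=True, a line that has to be inserted lands directly under the defaults line, so insertions stack in reverse task order; assuming none of the regexps matched an existing line, the section would now read approximately:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }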
Oct 10 10:03:01 compute-0 ceph-mon[73551]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000071s ======
Oct 10 10:03:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000071s
Oct 10 10:03:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:02.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:02 compute-0 sudo[235841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujecgvtcbbjqtsesdnofsbcjxitebux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090582.255637-1745-103023139508532/AnsiballZ_stat.py'
Oct 10 10:03:02 compute-0 sudo[235841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:03:02 compute-0 python3.9[235843]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:02 compute-0 sudo[235841]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:03:03 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ef000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[219446]: 10/10/2025 10:03:03 : epoch 68e8d96b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ec8003d10 fd 38 proxy ignored for local
Oct 10 10:03:03 compute-0 kernel: ganesha.nfsd[222140]: segfault at 50 ip 00007f1f9e7c932e sp 00007f1f56ffc210 error 4 in libntirpc.so.5.8[7f1f9e7ae000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 10 10:03:03 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
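The kernel line decodes to a classic NULL-dereference pattern: "segfault at 50" is the faulting address (0x50), "error 4" is the x86 page-fault code for a user-mode read of an unmapped page, and the bytes after the <> marker (45 8b 65 50) appear to disassemble as mov r12d,[r13+0x50], i.e. r13 was NULL and the crash read a field at offset 0x50 inside libntirpc. A small decoder for the error-code bits:

    # x86 page-fault error code bits: 0=present, 1=write, 2=user mode.
    def decode_pf_error(err: int) -> str:
        return ", ".join([
            "protection violation" if err & 1 else "page not present",
            "write" if err & 2 else "read",
            "user mode" if err & 4 else "kernel mode",
        ])

    print(decode_pf_error(4))   # -> page not present, read, user mode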
Oct 10 10:03:03 compute-0 systemd[1]: Started Process Core Dump (PID 235964/UID 0).
Oct 10 10:03:03 compute-0 sudo[235998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakobnmxklrjowyhgyftewomijzuqaeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090583.0595787-1769-7729347895220/AnsiballZ_file.py'
Oct 10 10:03:03 compute-0 sudo[235998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:03 compute-0 python3.9[236000]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:03 compute-0 sudo[235998]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:03 compute-0 ceph-mon[73551]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:03:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:04 compute-0 sudo[236151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wossphbdjacwltmcmhhwdyywvqsnakqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090584.002551-1796-263083768472122/AnsiballZ_file.py'
Oct 10 10:03:04 compute-0 sudo[236151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:04.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:04 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:03:04 compute-0 python3.9[236153]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:04 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 10 10:03:04 compute-0 systemd-coredump[235971]: Process 219452 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f1f9e7c932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
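systemd-coredump caught the crash but could not symbolize it (frame #0 is n/a), leaving only the module offset 0x2232e in libntirpc.so.5.8. With matching debuginfo the frame can be resolved offline; a sketch using binutils addr2line, with the path and offset taken from the trace above (the library lives inside the ceph container image, so this would need to run there or against an extracted copy):

    import subprocess

    out = subprocess.run(
        ["addr2line", "-f", "-e", "/usr/lib64/libntirpc.so.5.8", "0x2232e"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)  # function name and source:line, if symbols are present

coredumpctl debug 219452 would be the interactive alternative, assuming the core is still retained.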
Oct 10 10:03:04 compute-0 sudo[236151]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:04 compute-0 systemd[1]: systemd-coredump@4-235964-0.service: Deactivated successfully.
Oct 10 10:03:04 compute-0 systemd[1]: systemd-coredump@4-235964-0.service: Consumed 1.278s CPU time.
Oct 10 10:03:04 compute-0 podman[236184]: 2025-10-10 10:03:04.717193312 +0000 UTC m=+0.031914267 container died 4848832baa41992e15e603a4d95d6bd9b25d8ba0f353b9fb27bb9bcd0ef0434e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-20830adcf8f33eddaf935d96ad9d00cc424a7a0315714589237ad69b8d548a22-merged.mount: Deactivated successfully.
Oct 10 10:03:04 compute-0 podman[236184]: 2025-10-10 10:03:04.767673133 +0000 UTC m=+0.082394078 container remove 4848832baa41992e15e603a4d95d6bd9b25d8ba0f353b9fb27bb9bcd0ef0434e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:03:04 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:03:04 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:03:04 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.741s CPU time.
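status=139 uses the shell's 128+signal encoding: 139 - 128 = 11 = SIGSEGV, so systemd is reporting the same segfault the kernel logged at 10:03:03, propagated through podman as the container's exit status; the "container died" and "container remove" podman events above are the cleanup of that exit.

    import signal

    # 139 = 128 + 11 -> the unit's main process died of SIGSEGV.
    print(signal.Signals(139 - 128).name)   # SIGSEGV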
Oct 10 10:03:05 compute-0 sudo[236354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jizyeeinpgjcvrhldlhfzxagrevcpakn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090584.8305938-1820-108632421666242/AnsiballZ_stat.py'
Oct 10 10:03:05 compute-0 sudo[236354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:05 compute-0 python3.9[236356]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:05 compute-0 sudo[236354]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:05 compute-0 sudo[236432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cssxfdhqcnushgopytngslzsdkwwgwnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090584.8305938-1820-108632421666242/AnsiballZ_file.py'
Oct 10 10:03:05 compute-0 sudo[236432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:05 compute-0 python3.9[236434]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:05 compute-0 sudo[236432]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:05 compute-0 ceph-mon[73551]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:06.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:06 compute-0 sudo[236585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovsmldmgdczqkepdsrlkordfwlqhqfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090586.0613182-1820-181805880306155/AnsiballZ_stat.py'
Oct 10 10:03:06 compute-0 sudo[236585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:06 compute-0 python3.9[236587]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:06 compute-0 sudo[236585]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:06 compute-0 sudo[236664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgvdkrpllpyoemorfwgrklowpvpjwvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090586.0613182-1820-181805880306155/AnsiballZ_file.py'
Oct 10 10:03:06 compute-0 sudo[236664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:07.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:03:07 compute-0 python3.9[236666]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:07 compute-0 sudo[236664]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:07] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:03:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:07] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:03:07 compute-0 sudo[236816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiaktbjqyzzluuvsvvcghnuihtynatyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090587.4502132-1889-188921241028243/AnsiballZ_file.py'
Oct 10 10:03:07 compute-0 sudo[236816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:07 compute-0 ceph-mon[73551]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:08 compute-0 python3.9[236818]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:08 compute-0 sudo[236816]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:08 compute-0 sudo[236982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxpusqdkqlxtzxsyquoivxkpmffjanub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090588.2702155-1913-212743440690083/AnsiballZ_stat.py'
Oct 10 10:03:08 compute-0 sudo[236982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:08 compute-0 podman[236943]: 2025-10-10 10:03:08.628903537 +0000 UTC m=+0.083285029 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 10 10:03:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:08 compute-0 python3.9[236989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:08 compute-0 sudo[236982]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:09 compute-0 sudo[237083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imtzzakzgodmosjjbtvpmounotxcwpgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090588.2702155-1913-212743440690083/AnsiballZ_file.py'
Oct 10 10:03:09 compute-0 sudo[237083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:09 compute-0 podman[237040]: 2025-10-10 10:03:09.164518193 +0000 UTC m=+0.098593370 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:03:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100309 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
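The haproxy warning is the downstream effect of the crash: the ingress health check does a plain Layer4 TCP connect to each ganesha backend, and with the nfs.cephfs.2 container gone the connect is refused, so that server is marked DOWN and the two surviving backends stay in rotation. A sketch of such a check (the port is an assumption; cephadm's ingress typically runs ganesha on a dedicated backend port rather than 2049 itself):

    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        # Mirrors haproxy's Layer4 check: success = TCP handshake completes.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(l4_check("compute-0.ctlplane.example.com", 12049))  # port assumed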
Oct 10 10:03:09 compute-0 python3.9[237088]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:09 compute-0 sudo[237083]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:09 compute-0 sudo[237244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbvjcuewokbmmlmmrkrmxaosxxgwjqfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090589.607126-1949-146292519149724/AnsiballZ_stat.py'
Oct 10 10:03:09 compute-0 sudo[237244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:10 compute-0 ceph-mon[73551]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:10 compute-0 python3.9[237246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:10 compute-0 sudo[237244]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:10.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:10.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:10 compute-0 sudo[237323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzepmzbaacouhwmvneclolckckuknlwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090589.607126-1949-146292519149724/AnsiballZ_file.py'
Oct 10 10:03:10 compute-0 sudo[237323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:10 compute-0 python3.9[237325]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:10 compute-0 sudo[237323]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:11 compute-0 sudo[237476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctebyseldlmeisxxlpojwewoofumcetu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090590.945787-1985-64959488485031/AnsiballZ_systemd.py'
Oct 10 10:03:11 compute-0 sudo[237476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:11 compute-0 python3.9[237478]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:03:11 compute-0 systemd[1]: Reloading.
Oct 10 10:03:11 compute-0 systemd-rc-local-generator[237505]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:11 compute-0 systemd-sysv-generator[237508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:12 compute-0 ceph-mon[73551]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:12 compute-0 sudo[237476]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:12.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:12.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:12 compute-0 sudo[237665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tarrqyidbnpdeytirhazlqggevrctzsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090592.2460299-2009-276720202600692/AnsiballZ_stat.py'
Oct 10 10:03:12 compute-0 sudo[237665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:12 compute-0 python3.9[237667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:12 compute-0 sudo[237665]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:13 compute-0 sudo[237744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-somoryrpiftuwntgsxijhcqqicaftyex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090592.2460299-2009-276720202600692/AnsiballZ_file.py'
Oct 10 10:03:13 compute-0 sudo[237744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:13 compute-0 python3.9[237746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:13 compute-0 sudo[237744]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.830084) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593830168, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1120, "num_deletes": 254, "total_data_size": 1987668, "memory_usage": 2025184, "flush_reason": "Manual Compaction"}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593857411, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1967335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17981, "largest_seqno": 19100, "table_properties": {"data_size": 1962011, "index_size": 2784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10656, "raw_average_key_size": 18, "raw_value_size": 1951470, "raw_average_value_size": 3387, "num_data_blocks": 125, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090484, "oldest_key_time": 1760090484, "file_creation_time": 1760090593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 27385 microseconds, and 6041 cpu microseconds.
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.857478) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1967335 bytes OK
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.857507) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.861340) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.861382) EVENT_LOG_v1 {"time_micros": 1760090593861371, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.861411) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1982694, prev total WAL file size 1982694, number of live WAL files 2.
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.862384) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1921KB)], [38(11MB)]
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593862425, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14184317, "oldest_snapshot_seqno": -1}
Oct 10 10:03:13 compute-0 sudo[237896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfpsujtvexhgopqwkawluvgwqbxksamg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090593.6252272-2045-232511488810827/AnsiballZ_stat.py'
Oct 10 10:03:13 compute-0 sudo[237896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5002 keys, 13702709 bytes, temperature: kUnknown
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593995184, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13702709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13667744, "index_size": 21351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126896, "raw_average_key_size": 25, "raw_value_size": 13575581, "raw_average_value_size": 2714, "num_data_blocks": 878, "num_entries": 5002, "num_filter_entries": 5002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760090593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.995480) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13702709 bytes
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.996947) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.8 rd, 103.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.7 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(14.2) write-amplify(7.0) OK, records in: 5524, records dropped: 522 output_compression: NoCompression
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.996968) EVENT_LOG_v1 {"time_micros": 1760090593996958, "job": 18, "event": "compaction_finished", "compaction_time_micros": 132846, "compaction_time_cpu_micros": 33909, "output_level": 6, "num_output_files": 1, "total_output_size": 13702709, "num_input_records": 5524, "num_output_records": 5002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593997430, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593999500, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.862251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.999640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.999649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.999651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.999653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:03:13.999655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:14 compute-0 ceph-mon[73551]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:14 compute-0 python3.9[237898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:14.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:14 compute-0 sudo[237896]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:14.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:14 compute-0 sudo[237975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqddctarnzswegilbumwyipyfsveecgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090593.6252272-2045-232511488810827/AnsiballZ_file.py'
Oct 10 10:03:14 compute-0 sudo[237975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:03:14 compute-0 python3.9[237977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:14 compute-0 sudo[237975]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:14 compute-0 sudo[237978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:03:14 compute-0 sudo[237978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:14 compute-0 sudo[237978]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:15 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 5.
Oct 10 10:03:15 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:03:15 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.741s CPU time.
Oct 10 10:03:15 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:03:15 compute-0 sudo[238172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hacvqqwcbnsyxyazovrcvdxhuacdzgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090594.9456007-2081-38080493676904/AnsiballZ_systemd.py'
Oct 10 10:03:15 compute-0 sudo[238172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:15 compute-0 podman[238203]: 2025-10-10 10:03:15.407250701 +0000 UTC m=+0.065092559 container create b83c6f2774a66a3fbaea36bfd3dd23a87ffe1057ea900137c30878ae83688bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:15 compute-0 podman[238203]: 2025-10-10 10:03:15.382451821 +0000 UTC m=+0.040293699 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f48d6014d3e884f029ad7b31eaf1237bb0a0a7d513529eeee5469a7e436893/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f48d6014d3e884f029ad7b31eaf1237bb0a0a7d513529eeee5469a7e436893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f48d6014d3e884f029ad7b31eaf1237bb0a0a7d513529eeee5469a7e436893/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f48d6014d3e884f029ad7b31eaf1237bb0a0a7d513529eeee5469a7e436893/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:15 compute-0 podman[238203]: 2025-10-10 10:03:15.504502864 +0000 UTC m=+0.162344742 container init b83c6f2774a66a3fbaea36bfd3dd23a87ffe1057ea900137c30878ae83688bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:15 compute-0 podman[238203]: 2025-10-10 10:03:15.511093252 +0000 UTC m=+0.168935110 container start b83c6f2774a66a3fbaea36bfd3dd23a87ffe1057ea900137c30878ae83688bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 10:03:15 compute-0 bash[238203]: b83c6f2774a66a3fbaea36bfd3dd23a87ffe1057ea900137c30878ae83688bbc
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:03:15 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:03:15 compute-0 python3.9[238186]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:03:15 compute-0 systemd[1]: Reloading.
Oct 10 10:03:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:03:15 compute-0 systemd-rc-local-generator[238283]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:15 compute-0 systemd-sysv-generator[238286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:16 compute-0 ceph-mon[73551]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:03:16 compute-0 systemd[1]: Starting Create netns directory...
Oct 10 10:03:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 10:03:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 10:03:16 compute-0 systemd[1]: Finished Create netns directory.
Oct 10 10:03:16 compute-0 sudo[238172]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:03:16
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', '.mgr', '.nfs', '.rgw.root', 'default.rgw.meta']
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:03:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:03:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:03:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:03:16 compute-0 sudo[238471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmzyfihukutuxhzenokbyqbjhrsiscuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090596.499918-2111-169874186700765/AnsiballZ_file.py'
Oct 10 10:03:16 compute-0 podman[238426]: 2025-10-10 10:03:16.854520844 +0000 UTC m=+0.066507067 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:03:16 compute-0 sudo[238471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:17.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:03:17 compute-0 python3.9[238475]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:17 compute-0 sudo[238471]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:03:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:03:17 compute-0 sudo[238626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hheizffqmpxwhmpbojszcclqitaidqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090597.3887403-2135-6028204982309/AnsiballZ_stat.py'
Oct 10 10:03:17 compute-0 sudo[238626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:17 compute-0 python3.9[238628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:17 compute-0 sudo[238626]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:18 compute-0 ceph-mon[73551]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:03:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:18.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:03:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4216 writes, 19K keys, 4216 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4216 writes, 4216 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1464 writes, 5944 keys, 1464 commit groups, 1.0 writes per commit group, ingest: 10.90 MB, 0.02 MB/s
                                           Interval WAL: 1464 writes, 1464 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.4      0.24              0.10         9    0.027       0      0       0.0       0.0
                                             L6      1/0   13.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    152.5    129.3      0.77              0.31         8    0.096     38K   4336       0.0       0.0
                                            Sum      1/0   13.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    115.9    128.8      1.01              0.41        17    0.060     38K   4336       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    103.2    105.2      0.47              0.14         6    0.079     17K   2046       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    152.5    129.3      0.77              0.31         8    0.096     38K   4336       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    129.0      0.24              0.10         8    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.11 GB read, 0.10 MB/s read, 1.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b2d7d9350#2 capacity: 304.00 MB usage: 6.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(359,5.81 MB,1.91066%) FilterBlock(18,116.11 KB,0.0372987%) IndexBlock(18,218.80 KB,0.0702858%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 10:03:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:18.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:18 compute-0 sudo[238750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsleczmghnvuicdvnsmqfpecawzulcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090597.3887403-2135-6028204982309/AnsiballZ_copy.py'
Oct 10 10:03:18 compute-0 sudo[238750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:18 compute-0 python3.9[238752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090597.3887403-2135-6028204982309/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:18 compute-0 sudo[238750]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:03:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:19 compute-0 sudo[238903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svzkonnxnevmiitldmampunnpxanrdzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090599.1865277-2186-214295648189428/AnsiballZ_file.py'
Oct 10 10:03:19 compute-0 sudo[238903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:19 compute-0 python3.9[238905]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:19 compute-0 sudo[238903]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:20 compute-0 ceph-mon[73551]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:03:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:20 compute-0 sudo[239056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsjcttyloklowehzrbfirhoynharknvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090600.0773723-2210-253263380610063/AnsiballZ_stat.py'
Oct 10 10:03:20 compute-0 sudo[239056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:20 compute-0 python3.9[239058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:20 compute-0 sudo[239056]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:03:20 compute-0 sudo[239180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfeqjnywnzorhwjktutuwejrnomwmplr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090600.0773723-2210-253263380610063/AnsiballZ_copy.py'
Oct 10 10:03:20 compute-0 sudo[239180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:21 compute-0 python3.9[239182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090600.0773723-2210-253263380610063/.source.json _original_basename=.nu8cfi1b follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:21 compute-0 sudo[239180]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:21 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:03:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:21 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:03:21 compute-0 sudo[239332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irwuzvfvooljqnuzhnsrfqykwmbnutgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090601.5161872-2255-110676965794463/AnsiballZ_file.py'
Oct 10 10:03:21 compute-0 sudo[239332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:22 compute-0 ceph-mon[73551]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:03:22 compute-0 python3.9[239334]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:22 compute-0 sudo[239332]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:22.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:22.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:22 compute-0 sudo[239486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gctefoifkwbsbytalfoofybvljgnqirq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090602.498176-2279-261786068375460/AnsiballZ_stat.py'
Oct 10 10:03:22 compute-0 sudo[239486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:23 compute-0 sudo[239486]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:23 compute-0 ceph-mon[73551]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:23 compute-0 sudo[239609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvgwvfjjvkpdbdwyntihwjussjtsaanm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090602.498176-2279-261786068375460/AnsiballZ_copy.py'
Oct 10 10:03:23 compute-0 sudo[239609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:23 compute-0 sudo[239609]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:24.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:24 compute-0 sudo[239762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dusqtdoulpqzaoooyknllaqacutvpftt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090604.124108-2330-202008154098776/AnsiballZ_container_config_data.py'
Oct 10 10:03:24 compute-0 sudo[239762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:24 compute-0 python3.9[239764]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 10 10:03:24 compute-0 sudo[239762]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:25 compute-0 sudo[239915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucgtwwiekttkfqbnjghewvboclbfeiqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090605.0629249-2357-66523237908587/AnsiballZ_container_config_hash.py'
Oct 10 10:03:25 compute-0 sudo[239915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:25 compute-0 sudo[239918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:03:25 compute-0 sudo[239918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:25 compute-0 sudo[239918]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:25 compute-0 sudo[239943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:03:25 compute-0 sudo[239943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:25 compute-0 python3.9[239917]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:03:25 compute-0 sudo[239915]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:25 compute-0 ceph-mon[73551]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:26 compute-0 sudo[239943]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:26.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:03:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100326 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:03:26 compute-0 sudo[240150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmrmkqgdcsflqktntbroszjfesklfvzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090606.020162-2384-16614169326732/AnsiballZ_podman_container_info.py'
Oct 10 10:03:26 compute-0 sudo[240150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:26.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:03:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:03:26 compute-0 sudo[240153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:03:26 compute-0 sudo[240153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:26 compute-0 sudo[240153]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:26 compute-0 sudo[240178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:03:26 compute-0 sudo[240178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:26 compute-0 python3.9[240152]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 10:03:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:03:26 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:03:26 compute-0 sudo[240150]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:27.012296502 +0000 UTC m=+0.068207166 container create 8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:26.970923877 +0000 UTC m=+0.026834541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:27.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:27.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:27.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:03:27 compute-0 systemd[1]: Started libpod-conmon-8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036.scope.
Oct 10 10:03:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:27.175623126 +0000 UTC m=+0.231533810 container init 8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:27.189991085 +0000 UTC m=+0.245901789 container start 8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:03:27 compute-0 trusting_stonebraker[240312]: 167 167
Oct 10 10:03:27 compute-0 systemd[1]: libpod-8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036.scope: Deactivated successfully.
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:27.238706995 +0000 UTC m=+0.294617679 container attach 8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:27.240974533 +0000 UTC m=+0.296885207 container died 8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:03:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ded42818f45a5b52c5f15e548ae328c2d69daad835320f910a190aa9c69d0e50-merged.mount: Deactivated successfully.
Oct 10 10:03:27 compute-0 podman[240296]: 2025-10-10 10:03:27.598614426 +0000 UTC m=+0.654525090 container remove 8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:27 compute-0 systemd[1]: libpod-conmon-8f84cdaf45f7ec96c46fc78577a7a238a4dc05f1f270486b413b249c9023d036.scope: Deactivated successfully.
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:03:27 compute-0 ceph-mon[73551]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:27 compute-0 podman[240350]: 2025-10-10 10:03:27.760952216 +0000 UTC m=+0.043257320 container create 5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:03:27 compute-0 systemd[1]: Started libpod-conmon-5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009.scope.
Oct 10 10:03:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9511a93fc90941fe37aacb2a4643a8eec7d7ba16997bc07af6a2759a591866b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9511a93fc90941fe37aacb2a4643a8eec7d7ba16997bc07af6a2759a591866b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9511a93fc90941fe37aacb2a4643a8eec7d7ba16997bc07af6a2759a591866b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9511a93fc90941fe37aacb2a4643a8eec7d7ba16997bc07af6a2759a591866b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9511a93fc90941fe37aacb2a4643a8eec7d7ba16997bc07af6a2759a591866b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:27 compute-0 podman[240350]: 2025-10-10 10:03:27.741949088 +0000 UTC m=+0.024254212 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:27 compute-0 podman[240350]: 2025-10-10 10:03:27.84727075 +0000 UTC m=+0.129575874 container init 5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:03:27 compute-0 podman[240350]: 2025-10-10 10:03:27.856150108 +0000 UTC m=+0.138455212 container start 5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:03:27 compute-0 podman[240350]: 2025-10-10 10:03:27.859454123 +0000 UTC m=+0.141759247 container attach 5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gagarin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:03:28 compute-0 laughing_gagarin[240370]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:03:28 compute-0 laughing_gagarin[240370]: --> All data devices are unavailable
Oct 10 10:03:28 compute-0 systemd[1]: libpod-5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009.scope: Deactivated successfully.
Oct 10 10:03:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:28.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:28 compute-0 podman[240461]: 2025-10-10 10:03:28.295001098 +0000 UTC m=+0.033311596 container died 5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 10:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9511a93fc90941fe37aacb2a4643a8eec7d7ba16997bc07af6a2759a591866b-merged.mount: Deactivated successfully.
Oct 10 10:03:28 compute-0 podman[240461]: 2025-10-10 10:03:28.34292207 +0000 UTC m=+0.081232548 container remove 5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:03:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:28.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:28 compute-0 systemd[1]: libpod-conmon-5a6cf4fcb4a546d00d2d021c411e1a80db8627608e8b7e5c280d722b15db5009.scope: Deactivated successfully.
Oct 10 10:03:28 compute-0 sudo[240526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjxgsozniaqajuumytlxtuabledlxha ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090608.0888205-2423-19577782332843/AnsiballZ_edpm_container_manage.py'
Oct 10 10:03:28 compute-0 sudo[240526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:28 compute-0 sudo[240178]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:28 compute-0 sudo[240529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:03:28 compute-0 sudo[240529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:28 compute-0 sudo[240529]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:28 compute-0 sudo[240554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:03:28 compute-0 sudo[240554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:28 compute-0 python3[240528]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:03:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:28 compute-0 podman[240645]: 2025-10-10 10:03:28.958789879 +0000 UTC m=+0.046273785 container create 737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gates, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 10:03:29 compute-0 systemd[1]: Started libpod-conmon-737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a.scope.
Oct 10 10:03:29 compute-0 podman[240645]: 2025-10-10 10:03:28.941226111 +0000 UTC m=+0.028710047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:29 compute-0 podman[240645]: 2025-10-10 10:03:29.062303499 +0000 UTC m=+0.149787495 container init 737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gates, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:29 compute-0 podman[240645]: 2025-10-10 10:03:29.071330812 +0000 UTC m=+0.158814718 container start 737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 10:03:29 compute-0 podman[240645]: 2025-10-10 10:03:29.074623967 +0000 UTC m=+0.162107943 container attach 737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 10:03:29 compute-0 bold_gates[240662]: 167 167
Oct 10 10:03:29 compute-0 systemd[1]: libpod-737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a.scope: Deactivated successfully.
Oct 10 10:03:29 compute-0 podman[240645]: 2025-10-10 10:03:29.078519752 +0000 UTC m=+0.166003658 container died 737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dabc524c9653d1a7c02f685817628866e14115247917efdb75c99b6068ad22ff-merged.mount: Deactivated successfully.
Oct 10 10:03:29 compute-0 podman[240645]: 2025-10-10 10:03:29.11737011 +0000 UTC m=+0.204854036 container remove 737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:03:29 compute-0 systemd[1]: libpod-conmon-737a0a10671a84eb2f415860089280720ea4a76caae79f1e974e8a31833a397a.scope: Deactivated successfully.
Oct 10 10:03:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:29 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:29 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9cc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:29 compute-0 podman[240688]: 2025-10-10 10:03:29.294671088 +0000 UTC m=+0.053995263 container create c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lalande, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:03:29 compute-0 systemd[1]: Started libpod-conmon-c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82.scope.
Oct 10 10:03:29 compute-0 podman[240688]: 2025-10-10 10:03:29.265925562 +0000 UTC m=+0.025249827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2deb17279cadb5017a7eb120f23df13f34622f93504f599f75db471a32c15cfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2deb17279cadb5017a7eb120f23df13f34622f93504f599f75db471a32c15cfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2deb17279cadb5017a7eb120f23df13f34622f93504f599f75db471a32c15cfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2deb17279cadb5017a7eb120f23df13f34622f93504f599f75db471a32c15cfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:29 compute-0 podman[240688]: 2025-10-10 10:03:29.390489011 +0000 UTC m=+0.149813226 container init c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lalande, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:29 compute-0 podman[240688]: 2025-10-10 10:03:29.398462288 +0000 UTC m=+0.157786463 container start c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lalande, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:03:29 compute-0 podman[240688]: 2025-10-10 10:03:29.401937218 +0000 UTC m=+0.161261413 container attach c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lalande, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 10:03:29 compute-0 jovial_lalande[240707]: {
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:     "0": [
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:         {
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "devices": [
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "/dev/loop3"
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             ],
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "lv_name": "ceph_lv0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "lv_size": "21470642176",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "name": "ceph_lv0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "tags": {
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.cluster_name": "ceph",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.crush_device_class": "",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.encrypted": "0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.osd_id": "0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.type": "block",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.vdo": "0",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:                 "ceph.with_tpm": "0"
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             },
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "type": "block",
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:             "vg_name": "ceph_vg0"
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:         }
Oct 10 10:03:29 compute-0 jovial_lalande[240707]:     ]
Oct 10 10:03:29 compute-0 jovial_lalande[240707]: }
Oct 10 10:03:29 compute-0 ceph-mon[73551]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:29 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:29 compute-0 systemd[1]: libpod-c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82.scope: Deactivated successfully.
Oct 10 10:03:29 compute-0 podman[240688]: 2025-10-10 10:03:29.77309745 +0000 UTC m=+0.532421625 container died c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2deb17279cadb5017a7eb120f23df13f34622f93504f599f75db471a32c15cfd-merged.mount: Deactivated successfully.
Oct 10 10:03:30 compute-0 podman[240688]: 2025-10-10 10:03:30.143676293 +0000 UTC m=+0.903000478 container remove c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:03:30 compute-0 podman[240602]: 2025-10-10 10:03:30.143981714 +0000 UTC m=+1.369783478 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 10:03:30 compute-0 systemd[1]: libpod-conmon-c0cd1e27d5652e2dbca1dc0829c5a41f435389e2a4626f72604bf14290a3bf82.scope: Deactivated successfully.
Oct 10 10:03:30 compute-0 sudo[240554]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:30 compute-0 sudo[240762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:03:30 compute-0 sudo[240762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:30 compute-0 sudo[240762]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:30.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:30 compute-0 podman[240795]: 2025-10-10 10:03:30.305249226 +0000 UTC m=+0.050498222 container create 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd)
Oct 10 10:03:30 compute-0 podman[240795]: 2025-10-10 10:03:30.280394195 +0000 UTC m=+0.025643211 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 10:03:30 compute-0 python3[240528]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 10:03:30 compute-0 sudo[240807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:03:30 compute-0 sudo[240807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:30.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:30 compute-0 sudo[240526]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.729008083 +0000 UTC m=+0.047522789 container create 96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:30 compute-0 systemd[1]: Started libpod-conmon-96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa.scope.
Oct 10 10:03:30 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.708469481 +0000 UTC m=+0.026984197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.821541923 +0000 UTC m=+0.140056629 container init 96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.828184324 +0000 UTC m=+0.146699010 container start 96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.831478207 +0000 UTC m=+0.149992953 container attach 96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:03:30 compute-0 serene_leavitt[240994]: 167 167
Oct 10 10:03:30 compute-0 systemd[1]: libpod-96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa.scope: Deactivated successfully.
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.834789242 +0000 UTC m=+0.153303928 container died 96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b13ef43073963d504789a510589153cf8b043d7f5bbbcaa3ad5044631e86fbe3-merged.mount: Deactivated successfully.
Oct 10 10:03:30 compute-0 podman[240941]: 2025-10-10 10:03:30.875579977 +0000 UTC m=+0.194094673 container remove 96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Oct 10 10:03:30 compute-0 systemd[1]: libpod-conmon-96fb1de5f3a7411ff72ad941ae5de5a5539d94c469a5d682758aebc27e17a5fa.scope: Deactivated successfully.
Oct 10 10:03:30 compute-0 sudo[241083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvfnbsuragwnmiwicjdgbapmkoqssrzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090610.6863708-2447-47901691615770/AnsiballZ_stat.py'
Oct 10 10:03:30 compute-0 sudo[241083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:31 compute-0 podman[241091]: 2025-10-10 10:03:31.079961785 +0000 UTC m=+0.053425574 container create 9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cartwright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:03:31 compute-0 systemd[1]: Started libpod-conmon-9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a.scope.
Oct 10 10:03:31 compute-0 podman[241091]: 2025-10-10 10:03:31.058802431 +0000 UTC m=+0.032266250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:31 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49c1e9308ed6f46a97b367d007cfda5242ee5669f6a06475b59ed67498cd244/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49c1e9308ed6f46a97b367d007cfda5242ee5669f6a06475b59ed67498cd244/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49c1e9308ed6f46a97b367d007cfda5242ee5669f6a06475b59ed67498cd244/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49c1e9308ed6f46a97b367d007cfda5242ee5669f6a06475b59ed67498cd244/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-0 podman[241091]: 2025-10-10 10:03:31.183381162 +0000 UTC m=+0.156845041 container init 9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Oct 10 10:03:31 compute-0 podman[241091]: 2025-10-10 10:03:31.195642247 +0000 UTC m=+0.169106036 container start 9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:03:31 compute-0 python3.9[241087]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:31 compute-0 podman[241091]: 2025-10-10 10:03:31.21130213 +0000 UTC m=+0.184765919 container attach 9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cartwright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:03:31 compute-0 sudo[241083]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:31 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100331 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:03:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:31 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:03:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:31 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:31 compute-0 ceph-mon[73551]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:31 compute-0 lvm[241302]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:03:31 compute-0 lvm[241302]: VG ceph_vg0 finished
Oct 10 10:03:31 compute-0 quirky_cartwright[241107]: {}
Oct 10 10:03:31 compute-0 sudo[241336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdnhmljdctxgemzelrsgqwnovbaieti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090611.6720676-2474-148114034490059/AnsiballZ_file.py'
Oct 10 10:03:31 compute-0 sudo[241336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:31 compute-0 systemd[1]: libpod-9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a.scope: Deactivated successfully.
Oct 10 10:03:31 compute-0 systemd[1]: libpod-9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a.scope: Consumed 1.281s CPU time.
Oct 10 10:03:31 compute-0 podman[241091]: 2025-10-10 10:03:31.974653214 +0000 UTC m=+0.948117013 container died 9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e49c1e9308ed6f46a97b367d007cfda5242ee5669f6a06475b59ed67498cd244-merged.mount: Deactivated successfully.
Oct 10 10:03:32 compute-0 podman[241091]: 2025-10-10 10:03:32.020351639 +0000 UTC m=+0.993815428 container remove 9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:03:32 compute-0 systemd[1]: libpod-conmon-9f1bef8d181f7ca83098014d962d41a8fd97b10b7a81d82db22a7e83598b9e1a.scope: Deactivated successfully.
Oct 10 10:03:32 compute-0 sudo[240807]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:03:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:03:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:32 compute-0 sudo[241351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:03:32 compute-0 sudo[241351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:32 compute-0 sudo[241351]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:32 compute-0 python3.9[241338]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:32 compute-0 sudo[241336]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.003000105s ======
Oct 10 10:03:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:32.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000105s
Oct 10 10:03:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:32.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:32 compute-0 sudo[241449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwcntdmmdgahkblfjpebkwmeyyofdbnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090611.6720676-2474-148114034490059/AnsiballZ_stat.py'
Oct 10 10:03:32 compute-0 sudo[241449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:03:32 compute-0 python3.9[241451]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:32 compute-0 sudo[241449]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:33 compute-0 sudo[241601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmyuilkihjgsrsmrqfsyittznzzixovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090612.7820654-2474-235340222722624/AnsiballZ_copy.py'
Oct 10 10:03:33 compute-0 sudo[241601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:33 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:33 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9cc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:33 compute-0 python3.9[241603]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090612.7820654-2474-235340222722624/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:33 compute-0 sudo[241601]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:33 compute-0 sudo[241677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifoehvsqsadejgtqawvayjxumvykolzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090612.7820654-2474-235340222722624/AnsiballZ_systemd.py'
Oct 10 10:03:33 compute-0 sudo[241677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:33 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:34 compute-0 python3.9[241679]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:03:34 compute-0 systemd[1]: Reloading.
Oct 10 10:03:34 compute-0 ceph-mon[73551]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:03:34 compute-0 systemd-rc-local-generator[241708]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:34 compute-0 systemd-sysv-generator[241714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:34.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:34 compute-0 sudo[241677]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:03:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:34 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:03:34 compute-0 sudo[241790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybbxoiymmuqxowwjlisbmpnmyvapovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090612.7820654-2474-235340222722624/AnsiballZ_systemd.py'
Oct 10 10:03:34 compute-0 sudo[241790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:34 compute-0 sudo[241794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:03:34 compute-0 sudo[241794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:34 compute-0 sudo[241794]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:34 compute-0 python3.9[241792]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:03:35 compute-0 systemd[1]: Reloading.
Oct 10 10:03:35 compute-0 ceph-mon[73551]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:03:35 compute-0 systemd-rc-local-generator[241849]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:35 compute-0 systemd-sysv-generator[241852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:35 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:35 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:35 compute-0 systemd[1]: Starting multipathd container...
Oct 10 10:03:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619a596bfa732782378cb2133f30441ef3c167c2812fe426be05ddef3c5b465/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619a596bfa732782378cb2133f30441ef3c167c2812fe426be05ddef3c5b465/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013.
Oct 10 10:03:35 compute-0 podman[241859]: 2025-10-10 10:03:35.590238349 +0000 UTC m=+0.124770299 container init 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 10 10:03:35 compute-0 multipathd[241875]: + sudo -E kolla_set_configs
Oct 10 10:03:35 compute-0 podman[241859]: 2025-10-10 10:03:35.624174505 +0000 UTC m=+0.158706435 container start 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 10 10:03:35 compute-0 sudo[241881]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 10 10:03:35 compute-0 podman[241859]: multipathd
Oct 10 10:03:35 compute-0 sudo[241881]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:35 compute-0 sudo[241881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:35 compute-0 systemd[1]: Started multipathd container.
Oct 10 10:03:35 compute-0 sudo[241790]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:35 compute-0 multipathd[241875]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:03:35 compute-0 multipathd[241875]: INFO:__main__:Validating config file
Oct 10 10:03:35 compute-0 multipathd[241875]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:03:35 compute-0 multipathd[241875]: INFO:__main__:Writing out command to execute
Oct 10 10:03:35 compute-0 sudo[241881]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:35 compute-0 multipathd[241875]: ++ cat /run_command
Oct 10 10:03:35 compute-0 multipathd[241875]: + CMD='/usr/sbin/multipathd -d'
Oct 10 10:03:35 compute-0 multipathd[241875]: + ARGS=
Oct 10 10:03:35 compute-0 multipathd[241875]: + sudo kolla_copy_cacerts
Oct 10 10:03:35 compute-0 sudo[241904]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 10 10:03:35 compute-0 sudo[241904]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:35 compute-0 sudo[241904]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:35 compute-0 podman[241882]: 2025-10-10 10:03:35.702110129 +0000 UTC m=+0.065576076 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd)
Oct 10 10:03:35 compute-0 sudo[241904]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:35 compute-0 multipathd[241875]: + [[ ! -n '' ]]
Oct 10 10:03:35 compute-0 multipathd[241875]: + . kolla_extend_start
Oct 10 10:03:35 compute-0 multipathd[241875]: Running command: '/usr/sbin/multipathd -d'
Oct 10 10:03:35 compute-0 multipathd[241875]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 10:03:35 compute-0 multipathd[241875]: + umask 0022
Oct 10 10:03:35 compute-0 multipathd[241875]: + exec /usr/sbin/multipathd -d
Oct 10 10:03:35 compute-0 systemd[1]: 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013-1fad74588483a8c1.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 10:03:35 compute-0 systemd[1]: 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013-1fad74588483a8c1.service: Failed with result 'exit-code'.
Oct 10 10:03:35 compute-0 multipathd[241875]: 3519.348952 | --------start up--------
Oct 10 10:03:35 compute-0 multipathd[241875]: 3519.348979 | read /etc/multipath.conf
Oct 10 10:03:35 compute-0 multipathd[241875]: 3519.356240 | path checkers start up
Oct 10 10:03:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:35 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:36.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:36.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:36 compute-0 python3.9[242065]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:37.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:03:37 compute-0 sudo[242218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtjzgiqoiocddxyxshroivvbyzmdpjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090616.906925-2582-236566491047976/AnsiballZ_command.py'
Oct 10 10:03:37 compute-0 sudo[242218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:37 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:37 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:03:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:03:37 compute-0 python3.9[242220]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:03:37 compute-0 sudo[242218]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:37 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:37 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:03:37 compute-0 ceph-mon[73551]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:03:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:37 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:38 compute-0 sudo[242384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuraoyrlodltbvytzrlxygvpsievxclp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090617.8362412-2606-101946898034318/AnsiballZ_systemd.py'
Oct 10 10:03:38 compute-0 sudo[242384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:38.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:38 compute-0 python3.9[242386]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:03:38 compute-0 systemd[1]: Stopping multipathd container...
Oct 10 10:03:38 compute-0 multipathd[241875]: 3522.175885 | exit (signal)
Oct 10 10:03:38 compute-0 multipathd[241875]: 3522.176083 | --------shut down-------
Oct 10 10:03:38 compute-0 systemd[1]: libpod-8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013.scope: Deactivated successfully.
Oct 10 10:03:38 compute-0 podman[242390]: 2025-10-10 10:03:38.581932155 +0000 UTC m=+0.079432076 container died 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:03:38 compute-0 systemd[1]: 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013-1fad74588483a8c1.timer: Deactivated successfully.
Oct 10 10:03:38 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013.
Oct 10 10:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013-userdata-shm.mount: Deactivated successfully.
Oct 10 10:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c619a596bfa732782378cb2133f30441ef3c167c2812fe426be05ddef3c5b465-merged.mount: Deactivated successfully.
Oct 10 10:03:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 10 10:03:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:38 compute-0 podman[242390]: 2025-10-10 10:03:38.955083607 +0000 UTC m=+0.452583528 container cleanup 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Oct 10 10:03:38 compute-0 podman[242390]: multipathd
Oct 10 10:03:39 compute-0 podman[242421]: multipathd
Oct 10 10:03:39 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 10 10:03:39 compute-0 systemd[1]: Stopped multipathd container.
Oct 10 10:03:39 compute-0 systemd[1]: Starting multipathd container...
Oct 10 10:03:39 compute-0 podman[242420]: 2025-10-10 10:03:39.088338628 +0000 UTC m=+0.092848941 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:03:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619a596bfa732782378cb2133f30441ef3c167c2812fe426be05ddef3c5b465/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c619a596bfa732782378cb2133f30441ef3c167c2812fe426be05ddef3c5b465/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013.
Oct 10 10:03:39 compute-0 podman[242453]: 2025-10-10 10:03:39.217499248 +0000 UTC m=+0.123316968 container init 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:03:39 compute-0 multipathd[242469]: + sudo -E kolla_set_configs
Oct 10 10:03:39 compute-0 sudo[242486]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 10 10:03:39 compute-0 sudo[242486]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:39 compute-0 sudo[242486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:39 compute-0 podman[242453]: 2025-10-10 10:03:39.2524647 +0000 UTC m=+0.158282400 container start 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:03:39 compute-0 podman[242453]: multipathd
Oct 10 10:03:39 compute-0 systemd[1]: Started multipathd container.
Oct 10 10:03:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:39 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:39 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:39 compute-0 multipathd[242469]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:03:39 compute-0 multipathd[242469]: INFO:__main__:Validating config file
Oct 10 10:03:39 compute-0 multipathd[242469]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:03:39 compute-0 multipathd[242469]: INFO:__main__:Writing out command to execute
Oct 10 10:03:39 compute-0 sudo[242486]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:39 compute-0 multipathd[242469]: ++ cat /run_command
Oct 10 10:03:39 compute-0 sudo[242384]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:39 compute-0 multipathd[242469]: + CMD='/usr/sbin/multipathd -d'
Oct 10 10:03:39 compute-0 multipathd[242469]: + ARGS=
Oct 10 10:03:39 compute-0 multipathd[242469]: + sudo kolla_copy_cacerts
Oct 10 10:03:39 compute-0 sudo[242521]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 10 10:03:39 compute-0 sudo[242521]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:39 compute-0 sudo[242521]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:39 compute-0 sudo[242521]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:39 compute-0 multipathd[242469]: + [[ ! -n '' ]]
Oct 10 10:03:39 compute-0 multipathd[242469]: + . kolla_extend_start
Oct 10 10:03:39 compute-0 multipathd[242469]: Running command: '/usr/sbin/multipathd -d'
Oct 10 10:03:39 compute-0 multipathd[242469]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 10:03:39 compute-0 multipathd[242469]: + umask 0022
Oct 10 10:03:39 compute-0 multipathd[242469]: + exec /usr/sbin/multipathd -d
Oct 10 10:03:39 compute-0 podman[242472]: 2025-10-10 10:03:39.333982997 +0000 UTC m=+0.138122170 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 10 10:03:39 compute-0 podman[242489]: 2025-10-10 10:03:39.334164063 +0000 UTC m=+0.070925460 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:03:39 compute-0 multipathd[242469]: 3522.968401 | --------start up--------
Oct 10 10:03:39 compute-0 multipathd[242469]: 3522.968421 | read /etc/multipath.conf
Oct 10 10:03:39 compute-0 systemd[1]: 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013-6c23001ee93b093c.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 10:03:39 compute-0 systemd[1]: 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013-6c23001ee93b093c.service: Failed with result 'exit-code'.
Oct 10 10:03:39 compute-0 multipathd[242469]: 3522.975494 | path checkers start up
Oct 10 10:03:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:39 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:39 compute-0 sudo[242684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrtqzxahtdktytcpwvidhtydqfirqic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090619.485915-2630-54882996929640/AnsiballZ_file.py'
Oct 10 10:03:39 compute-0 sudo[242684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:39 compute-0 ceph-mon[73551]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 10 10:03:39 compute-0 python3.9[242686]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:40 compute-0 sudo[242684]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000070s ======
Oct 10 10:03:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000070s
Oct 10 10:03:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:03:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:40 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:03:40 compute-0 sudo[242838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xunrrqpvpwoqgpjxmskrbdpebgftmphr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090620.6096141-2666-169913567968765/AnsiballZ_file.py'
Oct 10 10:03:40 compute-0 sudo[242838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:41 compute-0 python3.9[242840]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 10:03:41 compute-0 sudo[242838]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:41 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:41 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:41 compute-0 sudo[242990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ununglmibdejjfnegyesgtxtaijerphg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090621.428545-2690-32460236616181/AnsiballZ_modprobe.py'
Oct 10 10:03:41 compute-0 sudo[242990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:41 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:03:41.890 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:03:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:03:41.894 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:03:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:03:41.894 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:03:41 compute-0 python3.9[242992]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 10 10:03:41 compute-0 kernel: Key type psk registered
Oct 10 10:03:42 compute-0 sudo[242990]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:42 compute-0 ceph-mon[73551]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:03:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:42.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:42.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:03:42 compute-0 sudo[243155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvgaewngxjsufmckmtmikzbkwzgecav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090622.339502-2714-7333826597724/AnsiballZ_stat.py'
Oct 10 10:03:42 compute-0 sudo[243155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:42 compute-0 python3.9[243157]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:42 compute-0 sudo[243155]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:43 compute-0 sudo[243279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmusjfyhimkkgeuibgxxjezjakpbvzkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090622.339502-2714-7333826597724/AnsiballZ_copy.py'
Oct 10 10:03:43 compute-0 sudo[243279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:43 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9cc002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:43 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4003730 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:43 compute-0 python3.9[243281]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090622.339502-2714-7333826597724/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:43 compute-0 sudo[243279]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:43 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:44 compute-0 ceph-mon[73551]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:03:44 compute-0 sudo[243432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhrxpzkgafvojcmfnvjukslvbtoedmjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090623.9177222-2762-159927793664207/AnsiballZ_lineinfile.py'
Oct 10 10:03:44 compute-0 sudo[243432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:44.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:44 compute-0 python3.9[243434]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:44 compute-0 sudo[243432]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:45 compute-0 sudo[243585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjfqfuvfbpdeamyqncsxwnjsmskamiao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090624.7892585-2786-100705792250616/AnsiballZ_systemd.py'
Oct 10 10:03:45 compute-0 sudo[243585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:45 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:45 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:45 compute-0 python3.9[243587]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:03:45 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 10:03:45 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 10 10:03:45 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 10 10:03:45 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 10 10:03:45 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 10 10:03:45 compute-0 sudo[243585]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:45 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:46 compute-0 ceph-mon[73551]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:46 compute-0 sudo[243742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfciwdlcczecrgedwbxowvxkdiznaxcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090625.8589578-2810-138420646746054/AnsiballZ_setup.py'
Oct 10 10:03:46 compute-0 sudo[243742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:46.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100346 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:03:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:03:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:46.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:03:46 compute-0 python3.9[243744]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 10:03:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:46 compute-0 sudo[243742]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:47.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:03:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:03:47 compute-0 podman[243789]: 2025-10-10 10:03:47.225649712 +0000 UTC m=+0.057804075 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:03:47 compute-0 sudo[243845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvcaynticlxwqbgjawjddtzhwymlgzzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090625.8589578-2810-138420646746054/AnsiballZ_dnf.py'
Oct 10 10:03:47 compute-0 sudo[243845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:47 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:47 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:47] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:03:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:47] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:03:47 compute-0 python3.9[243847]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 10:03:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:47 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:48 compute-0 ceph-mon[73551]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:48.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:49 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:49 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:49 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:50 compute-0 ceph-mon[73551]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:50.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:03:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:51 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:51 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0003af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:51 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:52 compute-0 ceph-mon[73551]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:03:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:03:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:03:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:52.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:03:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:53 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:53 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:53 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0003af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:54 compute-0 ceph-mon[73551]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:03:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:54.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:54.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:54 compute-0 systemd[1]: Reloading.
Oct 10 10:03:54 compute-0 systemd-sysv-generator[243889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:54 compute-0 systemd-rc-local-generator[243886]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:54 compute-0 systemd[1]: Reloading.
Oct 10 10:03:54 compute-0 systemd-rc-local-generator[243923]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:54 compute-0 systemd-sysv-generator[243926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:55 compute-0 sudo[243931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:03:55 compute-0 sudo[243931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:55 compute-0 sudo[243931]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:55 compute-0 systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 10:03:55 compute-0 systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 10:03:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:55 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:55 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:55 compute-0 lvm[243991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:03:55 compute-0 lvm[243991]: VG ceph_vg0 finished
Oct 10 10:03:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 10:03:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 10 10:03:55 compute-0 systemd[1]: Reloading.
Oct 10 10:03:55 compute-0 systemd-sysv-generator[244045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:55 compute-0 systemd-rc-local-generator[244042]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:55 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:55 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 10:03:56 compute-0 ceph-mon[73551]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:56.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:56 compute-0 sudo[243845]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:03:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:03:57.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:03:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 10:03:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 10 10:03:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.838s CPU time.
Oct 10 10:03:57 compute-0 systemd[1]: run-rd2bc7825363a47ea95859bde4684530c.service: Deactivated successfully.
Oct 10 10:03:57 compute-0 ceph-mon[73551]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:57 compute-0 sudo[245334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxoonhpbxarlrdeajwpihthyuvwmqbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090636.8910618-2846-133185277189412/AnsiballZ_file.py'
Oct 10 10:03:57 compute-0 sudo[245334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:57 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9f0003af0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:57 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9d4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:57] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:03:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:03:57] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:03:57 compute-0 python3.9[245336]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:57 compute-0 sudo[245334]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:57 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:58 compute-0 python3.9[245486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 10:03:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:58.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:03:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:59 compute-0 sudo[245642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cekvhqbalndajjfftvnxzxjyqxkuodil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090638.782074-2898-217779825654485/AnsiballZ_file.py'
Oct 10 10:03:59 compute-0 sudo[245642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:59 compute-0 python3.9[245644]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:59 compute-0 sudo[245642]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:59 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:59 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 10 10:03:59 compute-0 ceph-mon[73551]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:03:59 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:04:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:00.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:04:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:00.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:00 compute-0 sudo[245796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzdbydgeedohkaxpjngbkjmlvjorvzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090639.9121315-2931-73571664674158/AnsiballZ_systemd_service.py'
Oct 10 10:04:00 compute-0 sudo[245796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100400 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:04:00 compute-0 python3.9[245798]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:04:00 compute-0 systemd[1]: Reloading.
Oct 10 10:04:01 compute-0 systemd-rc-local-generator[245826]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:04:01 compute-0 systemd-sysv-generator[245829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:04:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:01 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:01 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:04:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:01 compute-0 sudo[245796]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:01 compute-0 ceph-mon[73551]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:01 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:02 compute-0 python3.9[245984]: ansible-ansible.builtin.service_facts Invoked
Oct 10 10:04:02 compute-0 network[246002]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 10:04:02 compute-0 network[246003]: 'network-scripts' will be removed from distribution in near future.
Oct 10 10:04:02 compute-0 network[246004]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 10:04:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:02.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:02.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:04:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:03 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:03 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:03 compute-0 ceph-mon[73551]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:04:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:03 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:04:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:04:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000035s ======
Oct 10 10:04:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:04.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Oct 10 10:04:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:04:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:05 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:05 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:05 compute-0 ceph-mon[73551]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:04:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:05 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:06.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:04:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:07.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:04:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:07 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:07 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:07] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:07] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:07 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:07 compute-0 ceph-mon[73551]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:04:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:08.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:04:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:08 compute-0 sudo[246288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csdzippyqppcyvwdcnmhnogwtfdzpvqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090648.6674714-2988-240862579826999/AnsiballZ_systemd_service.py'
Oct 10 10:04:08 compute-0 sudo[246288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:09 compute-0 podman[246291]: 2025-10-10 10:04:09.220426797 +0000 UTC m=+0.067593881 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:04:09 compute-0 python3.9[246290]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:09 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:09 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:09 compute-0 sudo[246288]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:09 compute-0 sudo[246489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atlnrojaezhcpfgyfaubrqvjvyuvpacr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090649.456644-2988-132005536955267/AnsiballZ_systemd_service.py'
Oct 10 10:04:09 compute-0 sudo[246489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:09 compute-0 podman[246435]: 2025-10-10 10:04:09.781391128 +0000 UTC m=+0.069128330 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:04:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:09 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:09 compute-0 ceph-mon[73551]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:04:09 compute-0 podman[246436]: 2025-10-10 10:04:09.838289075 +0000 UTC m=+0.120633955 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:04:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:10 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:04:10 compute-0 python3.9[246500]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:10 compute-0 sudo[246489]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:10.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:10 compute-0 sudo[246660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tddcmkxbsgjwgloffdznkgvcizagxmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090650.301908-2988-80242583304095/AnsiballZ_systemd_service.py'
Oct 10 10:04:10 compute-0 sudo[246660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:04:10 compute-0 python3.9[246662]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:11 compute-0 sudo[246660]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:11 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:11 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:11 compute-0 sudo[246814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcxlzzzcdesftcmqhkmviguviystlpef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090651.1714158-2988-130689872445558/AnsiballZ_systemd_service.py'
Oct 10 10:04:11 compute-0 sudo[246814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:11 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:11 compute-0 ceph-mon[73551]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:04:11 compute-0 python3.9[246816]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:11 compute-0 sudo[246814]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:12.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:12.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:12 compute-0 sudo[246968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcuqyzjooecurcxqhdnvswfmungcoels ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090652.0431073-2988-137193045169171/AnsiballZ_systemd_service.py'
Oct 10 10:04:12 compute-0 sudo[246968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:04:12 compute-0 python3.9[246970]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:12 compute-0 sudo[246968]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:13 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:04:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:13 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:04:13 compute-0 sudo[247122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nauzvhxwzinpsblzxmnxcymxyeovrtoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090652.9466846-2988-268629463140515/AnsiballZ_systemd_service.py'
Oct 10 10:04:13 compute-0 sudo[247122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:13 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:13 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:13 compute-0 python3.9[247124]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:13 compute-0 sudo[247122]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:13 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:13 compute-0 ceph-mon[73551]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:04:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:14 compute-0 sudo[247276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvpcmlrujjfdwepvtzxtoparuyzrjdya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090653.8559482-2988-258024465017154/AnsiballZ_systemd_service.py'
Oct 10 10:04:14 compute-0 sudo[247276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:14.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:14.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:14 compute-0 python3.9[247278]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:14 compute-0 sudo[247276]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:04:14 compute-0 sudo[247430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biwxpawausgpfdenazdnoudsfgrhjyfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090654.615759-2988-240082624367648/AnsiballZ_systemd_service.py'
Oct 10 10:04:14 compute-0 sudo[247430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:15 compute-0 python3.9[247432]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:15 compute-0 sudo[247430]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:15 compute-0 sudo[247434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:04:15 compute-0 sudo[247434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:15 compute-0 sudo[247434]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:15 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:15 compute-0 ceph-mon[73551]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:04:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:16 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:04:16
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'volumes', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'images']
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:04:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:04:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:16.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:04:16 compute-0 sudo[247609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybciarbljztvhimbiefsrmbsmafhkegc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090656.0995047-3165-60422534062685/AnsiballZ_file.py'
Oct 10 10:04:16 compute-0 sudo[247609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:04:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:16.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:04:16 compute-0 python3.9[247611]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:16 compute-0 sudo[247609]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:04:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:17 compute-0 sudo[247762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxskfbifdofahyepawaklanaitfuties ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090656.7368908-3165-105274180975305/AnsiballZ_file.py'
Oct 10 10:04:17 compute-0 sudo[247762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:17.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:04:17 compute-0 python3.9[247764]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:17 compute-0 sudo[247762]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:17 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:17 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:04:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:04:17 compute-0 sudo[247930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlccmzroljylsbyscmwbfjskigghynxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090657.3616953-3165-96387884910298/AnsiballZ_file.py'
Oct 10 10:04:17 compute-0 sudo[247930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:17 compute-0 podman[247888]: 2025-10-10 10:04:17.685825158 +0000 UTC m=+0.063305503 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 10 10:04:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:17 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:17 compute-0 python3.9[247936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:17 compute-0 sudo[247930]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:17 compute-0 ceph-mon[73551]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:04:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:18.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:18 compute-0 sudo[248087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmfjazwerrqakgazsigvrshwprgsdhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090658.0307643-3165-187400951475412/AnsiballZ_file.py'
Oct 10 10:04:18 compute-0 sudo[248087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:18 compute-0 python3.9[248089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:18 compute-0 sudo[248087]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:04:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:19 compute-0 sudo[248240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfppliqsquzlumemlqflyhekeuuvkccv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090658.7267594-3165-43146258629172/AnsiballZ_file.py'
Oct 10 10:04:19 compute-0 sudo[248240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:19 compute-0 python3.9[248242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:19 compute-0 sudo[248240]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:19 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:19 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:19 compute-0 sudo[248392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whasvkoqwvpkwbdbpzaszgbnpeiwwmzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090659.4185278-3165-144807970247071/AnsiballZ_file.py'
Oct 10 10:04:19 compute-0 sudo[248392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:19 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:19 compute-0 ceph-mon[73551]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:04:19 compute-0 python3.9[248394]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:19 compute-0 sudo[248392]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:20.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:20 compute-0 sudo[248545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thvpfpqymevnoodnbbwpzrbojfghbvgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090660.0817-3165-155150188526974/AnsiballZ_file.py'
Oct 10 10:04:20 compute-0 sudo[248545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:20 compute-0 python3.9[248547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:20 compute-0 sudo[248545]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:04:21 compute-0 sudo[248698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlrnbhyiqrruehhutybglhuapondvyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090660.762943-3165-173021686117494/AnsiballZ_file.py'
Oct 10 10:04:21 compute-0 sudo[248698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:21 compute-0 python3.9[248700]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:21 compute-0 sudo[248698]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:21 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:21 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9e0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:21 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:21 compute-0 ceph-mon[73551]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:04:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:22.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:22 compute-0 sudo[248851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrdnghxokcioljutktkvvqxjnlxqueo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090662.0630703-3336-226644128769679/AnsiballZ_file.py'
Oct 10 10:04:22 compute-0 sudo[248851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:22 compute-0 python3.9[248853]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:22 compute-0 sudo[248851]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:04:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100422 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:04:23 compute-0 sudo[249004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdekiydvviitvmeiilxdqvpfgwadxfmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090662.7691903-3336-254249794620243/AnsiballZ_file.py'
Oct 10 10:04:23 compute-0 sudo[249004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:23 compute-0 python3.9[249006]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:23 compute-0 sudo[249004]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:23 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:23 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:23 compute-0 sudo[249156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wddzfhnynibxhkkjiweyuesbldvmrnqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090663.4434786-3336-195293022755554/AnsiballZ_file.py'
Oct 10 10:04:23 compute-0 sudo[249156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:23 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:23 compute-0 ceph-mon[73551]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:04:23 compute-0 python3.9[249158]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:23 compute-0 sudo[249156]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:24.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:24 compute-0 sudo[249309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkfsjsflwdjefukhhccjsvhqsiofvxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090664.1225507-3336-84457271491694/AnsiballZ_file.py'
Oct 10 10:04:24 compute-0 sudo[249309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
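[annotation] The radosgw beast lines above are anonymous HEAD health probes from 192.168.122.100 and 192.168.122.102 arriving on a roughly two-second cadence. A rough pattern for turning them into (client, status, latency) tuples; the regex is an assumption fitted to these probe lines, not radosgw's documented format:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7f96beba75d0: 192.168.122.100 - anonymous '
            '[10/Oct/2025:10:04:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000032s')

    m = BEAST.search(line)
    print(m["ip"], m["status"], float(m["lat"]))  # 192.168.122.100 200 0.001000032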
Oct 10 10:04:24 compute-0 python3.9[249311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:24 compute-0 sudo[249309]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:25 compute-0 sudo[249462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iagrdsiqqygzflcrjkdjbnnmqukmtvvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090664.7636368-3336-110294218809403/AnsiballZ_file.py'
Oct 10 10:04:25 compute-0 sudo[249462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:25 compute-0 python3.9[249464]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:25 compute-0 sudo[249462]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:25 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9bc003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:25 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:25 compute-0 sudo[249614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jajomdyzmkoqchydpamykgauehszyfxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090665.3786771-3336-219015218663128/AnsiballZ_file.py'
Oct 10 10:04:25 compute-0 sudo[249614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:25 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:25 compute-0 python3.9[249616]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:25 compute-0 sudo[249614]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:25 compute-0 ceph-mon[73551]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:25 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 10 10:04:25 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:25.994777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:04:25 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 10 10:04:25 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090665994838, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 855, "num_deletes": 251, "total_data_size": 1341601, "memory_usage": 1362328, "flush_reason": "Manual Compaction"}
Oct 10 10:04:25 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666011191, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1327534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19101, "largest_seqno": 19955, "table_properties": {"data_size": 1323321, "index_size": 1929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9372, "raw_average_key_size": 19, "raw_value_size": 1314853, "raw_average_value_size": 2727, "num_data_blocks": 86, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090594, "oldest_key_time": 1760090594, "file_creation_time": 1760090665, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 16703 microseconds, and 5615 cpu microseconds.
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011473) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1327534 bytes OK
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011583) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.013571) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.013603) EVENT_LOG_v1 {"time_micros": 1760090666013592, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.013635) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1337525, prev total WAL file size 1337525, number of live WAL files 2.
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.015124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1296KB)], [41(13MB)]
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666015168, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15030243, "oldest_snapshot_seqno": -1}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4968 keys, 12865325 bytes, temperature: kUnknown
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666132917, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12865325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12831305, "index_size": 20470, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126756, "raw_average_key_size": 25, "raw_value_size": 12740304, "raw_average_value_size": 2564, "num_data_blocks": 839, "num_entries": 4968, "num_filter_entries": 4968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760090666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.133506) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12865325 bytes
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.134795) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 109.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(21.0) write-amplify(9.7) OK, records in: 5484, records dropped: 516 output_compression: NoCompression
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.134828) EVENT_LOG_v1 {"time_micros": 1760090666134812, "job": 20, "event": "compaction_finished", "compaction_time_micros": 118054, "compaction_time_cpu_micros": 42591, "output_level": 6, "num_output_files": 1, "total_output_size": 12865325, "num_input_records": 5484, "num_output_records": 4968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666135472, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666140220, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.014994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.140399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.140407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.140409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.140411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:04:26.140414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
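[annotation] ceph-mon's embedded RocksDB interleaves prose lines with machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished above). The JSON payload survives journald intact, so it can be recovered directly; sample copied from the flush_finished entry above:

    import json

    line = ('rocksdb: (Original Log Time 2025/10/10-10:04:26.013603) EVENT_LOG_v1 '
            '{"time_micros": 1760090666013592, "job": 19, "event": "flush_finished", '
            '"output_compression": "NoCompression", '
            '"lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}')

    marker = "EVENT_LOG_v1 "
    event = json.loads(line[line.index(marker) + len(marker):])
    print(event["event"], event["job"], event["lsm_state"])  # flush_finished 19 [1, 0, 0, 0, 0, 0, 1]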
Oct 10 10:04:26 compute-0 sudo[249767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlwwpnspmtjkzoxnvtungwhisrclepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090665.9822009-3336-149133453797373/AnsiballZ_file.py'
Oct 10 10:04:26 compute-0 sudo[249767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.003000097s ======
Oct 10 10:04:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:26.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000097s
Oct 10 10:04:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:26 compute-0 python3.9[249769]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:26 compute-0 sudo[249767]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:26 compute-0 sudo[249920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gawurtasnhpdfpjyujknecrmnwasefme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090666.6770635-3336-163774507798410/AnsiballZ_file.py'
Oct 10 10:04:26 compute-0 sudo[249920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:27.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:04:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:04:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:27.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
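[annotation] Both alertmanager webhook failures above are plain TCP connect timeouts to port 8443 on compute-1 and compute-2, before any HTTP exchange. A minimal reachability probe that reproduces the same dial step — hostnames and port copied from the errors; the 5-second timeout is an arbitrary illustrative choice:

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)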
Oct 10 10:04:27 compute-0 python3.9[249922]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:27 compute-0 sudo[249920]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:27 compute-0 kernel: ganesha.nfsd[240361]: segfault at 50 ip 00007fba9fef132e sp 00007fba53ffe210 error 4 in libntirpc.so.5.8[7fba9fed6000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 10 10:04:27 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
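[annotation] The kernel segfault line above carries everything needed to place the fault inside libntirpc: the instruction pointer, the mapping base, and the mapping length. The arithmetic, with the values copied from that line:

    ip = 0x7fba9fef132e      # faulting instruction pointer
    base = 0x7fba9fed6000    # start of the libntirpc.so.5.8 mapping
    size = 0x2c000           # mapping length
    offset = ip - base
    assert 0 <= offset < size
    print(hex(offset))       # 0x1b32e into the mapped segment

The systemd-coredump entry further down reports the same frame as libntirpc.so.5.8 + 0x2232e; the 0x7000 difference is presumably the file offset at which that segment is mapped, since the two tools measure from different bases.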
Oct 10 10:04:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[238219]: 10/10/2025 10:04:27 : epoch 68e8d9e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9c0003c10 fd 39 proxy ignored for local
Oct 10 10:04:27 compute-0 systemd[1]: Started Process Core Dump (PID 249947/UID 0).
Oct 10 10:04:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:04:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:04:28 compute-0 ceph-mon[73551]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:28 compute-0 sudo[250075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erlyjdwecetzullhkweveyfficgfkhii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090667.9695354-3510-256137527595947/AnsiballZ_command.py'
Oct 10 10:04:28 compute-0 sudo[250075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:28.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:28.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:28 compute-0 python3.9[250077]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:28 compute-0 sudo[250075]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:28 compute-0 systemd-coredump[249948]: Process 238223 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007fba9fef132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:04:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:28 compute-0 systemd[1]: systemd-coredump@5-249947-0.service: Deactivated successfully.
Oct 10 10:04:28 compute-0 systemd[1]: systemd-coredump@5-249947-0.service: Consumed 1.310s CPU time.
Oct 10 10:04:28 compute-0 podman[250111]: 2025-10-10 10:04:28.815142064 +0000 UTC m=+0.046105725 container died b83c6f2774a66a3fbaea36bfd3dd23a87ffe1057ea900137c30878ae83688bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f48d6014d3e884f029ad7b31eaf1237bb0a0a7d513529eeee5469a7e436893-merged.mount: Deactivated successfully.
Oct 10 10:04:28 compute-0 podman[250111]: 2025-10-10 10:04:28.88515668 +0000 UTC m=+0.116120341 container remove b83c6f2774a66a3fbaea36bfd3dd23a87ffe1057ea900137c30878ae83688bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:28 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:04:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:29 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:04:29 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.692s CPU time.
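[annotation] status=139 on the nfs.cephfs.2.0 unit is the shell-style encoding of the ganesha segfault: the unit's main process is the container runtime wrapper, which presumably propagates the payload's death as exit code 128 + signal number rather than as a raw signal. Decoding it:

    import signal

    status = 139                              # from the unit failure above
    print(signal.Signals(status - 128).name)  # SIGSEGV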
Oct 10 10:04:29 compute-0 python3.9[250278]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 10:04:30 compute-0 ceph-mon[73551]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:30 compute-0 sudo[250429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duvkoosoecnucccqzrrkjtgrhzunhpmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090669.9754677-3564-74167863780593/AnsiballZ_systemd_service.py'
Oct 10 10:04:30 compute-0 sudo[250429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:30.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:30 compute-0 python3.9[250431]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:04:30 compute-0 systemd[1]: Reloading.
Oct 10 10:04:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:30 compute-0 systemd-rc-local-generator[250457]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:04:30 compute-0 systemd-sysv-generator[250460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:04:31 compute-0 sudo[250429]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:04:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:31 compute-0 sudo[250617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbmimiadxlnhswymtyujawkenniuqpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090671.269484-3588-137383103212467/AnsiballZ_command.py'
Oct 10 10:04:31 compute-0 sudo[250617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:31 compute-0 python3.9[250619]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:31 compute-0 sudo[250617]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-0 ceph-mon[73551]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:32 compute-0 sudo[250771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdqpeftkrdhpoyrchevaqkijjzamgoqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090671.9449306-3588-243392173250623/AnsiballZ_command.py'
Oct 10 10:04:32 compute-0 sudo[250771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:32.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:32 compute-0 python3.9[250773]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:32 compute-0 sudo[250774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:32 compute-0 sudo[250774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:32 compute-0 sudo[250774]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-0 sudo[250771]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-0 sudo[250800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 10:04:32 compute-0 sudo[250800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:32 compute-0 sudo[250800]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:04:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:04:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:32 compute-0 sudo[250994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtwdjrmpqtuqrxgfsojkpfmvpzewvpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090672.612738-3588-262930895130725/AnsiballZ_command.py'
Oct 10 10:04:32 compute-0 sudo[250994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:04:32 compute-0 sudo[250995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:32 compute-0 sudo[250995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:32 compute-0 sudo[250995]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:04:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 sudo[251022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:04:33 compute-0 sudo[251022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:33 compute-0 python3.9[251001]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:33 compute-0 sudo[250994]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100433 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:04:33 compute-0 sudo[251022]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-0 sudo[251228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sitlcvnlwommvprmphjtwrnfnmlgczsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090673.2736337-3588-154369171162655/AnsiballZ_command.py'
Oct 10 10:04:33 compute-0 sudo[251228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:33 compute-0 python3.9[251230]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:33 compute-0 sudo[251228]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:04:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:04:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:33 compute-0 sudo[251300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:33 compute-0 sudo[251300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:33 compute-0 sudo[251300]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:34 compute-0 sudo[251345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:04:34 compute-0 sudo[251345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:34 compute-0 sudo[251432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqlrjggcgvsayktexwdqkrltiuhuvznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090673.888182-3588-131742375491473/AnsiballZ_command.py'
Oct 10 10:04:34 compute-0 sudo[251432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100434 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:04:34 compute-0 python3.9[251434]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:34.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:34 compute-0 sudo[251432]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:34 compute-0 podman[251477]: 2025-10-10 10:04:34.463543111 +0000 UTC m=+0.044562535 container create 9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_curran, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:04:34 compute-0 systemd[1]: Started libpod-conmon-9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786.scope.
Oct 10 10:04:34 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:04:34 compute-0 podman[251477]: 2025-10-10 10:04:34.445812465 +0000 UTC m=+0.026831919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:34 compute-0 podman[251477]: 2025-10-10 10:04:34.554303041 +0000 UTC m=+0.135322485 container init 9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 10:04:34 compute-0 podman[251477]: 2025-10-10 10:04:34.563022619 +0000 UTC m=+0.144042043 container start 9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:34 compute-0 podman[251477]: 2025-10-10 10:04:34.5674456 +0000 UTC m=+0.148465074 container attach 9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 10:04:34 compute-0 nice_curran[251541]: 167 167
Oct 10 10:04:34 compute-0 systemd[1]: libpod-9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786.scope: Deactivated successfully.
Oct 10 10:04:34 compute-0 podman[251572]: 2025-10-10 10:04:34.610719733 +0000 UTC m=+0.028251944 container died 9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 10:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c021390ff2b2595878d273afa8597a5daaa77685d28c82e093e656145be7a659-merged.mount: Deactivated successfully.
Oct 10 10:04:34 compute-0 podman[251572]: 2025-10-10 10:04:34.652006802 +0000 UTC m=+0.069539003 container remove 9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_curran, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:04:34 compute-0 systemd[1]: libpod-conmon-9000909e29812b2556081ca307cd9bf91d0ab4334bcaf84c3ab875ada7758786.scope: Deactivated successfully.
Oct 10 10:04:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:34 compute-0 sudo[251670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfehbakhbqjinobzkznxjwnixeuafmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090674.5121946-3588-157822777342728/AnsiballZ_command.py'
Oct 10 10:04:34 compute-0 sudo[251670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:34 compute-0 podman[251671]: 2025-10-10 10:04:34.874594452 +0000 UTC m=+0.057206558 container create 74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gagarin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:04:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:04:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:04:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:04:34 compute-0 systemd[1]: Started libpod-conmon-74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf.scope.
Oct 10 10:04:34 compute-0 podman[251671]: 2025-10-10 10:04:34.851557677 +0000 UTC m=+0.034169833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:34 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7c8f5504ea78b093ad54a2aac9c85ff55adfcae536b98c422bfe7da8cb6fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7c8f5504ea78b093ad54a2aac9c85ff55adfcae536b98c422bfe7da8cb6fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7c8f5504ea78b093ad54a2aac9c85ff55adfcae536b98c422bfe7da8cb6fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7c8f5504ea78b093ad54a2aac9c85ff55adfcae536b98c422bfe7da8cb6fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7c8f5504ea78b093ad54a2aac9c85ff55adfcae536b98c422bfe7da8cb6fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:34 compute-0 podman[251671]: 2025-10-10 10:04:34.974542776 +0000 UTC m=+0.157154892 container init 74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gagarin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:34 compute-0 podman[251671]: 2025-10-10 10:04:34.983314046 +0000 UTC m=+0.165926152 container start 74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:04:34 compute-0 podman[251671]: 2025-10-10 10:04:34.986665313 +0000 UTC m=+0.169277449 container attach 74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 10:04:35 compute-0 python3.9[251679]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:35 compute-0 sudo[251670]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-0 objective_gagarin[251689]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:04:35 compute-0 objective_gagarin[251689]: --> All data devices are unavailable
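
The two objective_gagarin lines above are cephadm's drive-group evaluation: the OSD spec matched one LVM device and no physical devices, and every candidate was rejected as unavailable (it already carries osd.0, as the lvm list output further down confirms), so no new OSD is created. A minimal sketch of the same availability check, assuming the `ceph` CLI with an admin keyring and the usual cephadm inventory JSON layout (per-host entries with `devices[].available` and `devices[].rejected_reasons`):

    #!/usr/bin/env python3
    # Sketch: list which devices cephadm considers available for new OSDs
    # and why the rest were rejected. Assumes the `ceph` CLI and an admin
    # keyring on this host; field names follow the cephadm inventory JSON.
    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "orch", "device", "ls", "--format", "json"]
    )
    for host in json.loads(raw):
        for dev in host.get("devices", []):
            state = "available" if dev.get("available") else \
                "rejected: " + ", ".join(dev.get("rejected_reasons", []))
            print(f'{host.get("addr", "?")} {dev["path"]}: {state}')
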
Oct 10 10:04:35 compute-0 systemd[1]: libpod-74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf.scope: Deactivated successfully.
Oct 10 10:04:35 compute-0 podman[251671]: 2025-10-10 10:04:35.391748274 +0000 UTC m=+0.574360380 container died 74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gagarin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:04:35 compute-0 sudo[251801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:04:35 compute-0 sudo[251801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:35 compute-0 sudo[251801]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-31c7c8f5504ea78b093ad54a2aac9c85ff55adfcae536b98c422bfe7da8cb6fa-merged.mount: Deactivated successfully.
Oct 10 10:04:35 compute-0 podman[251671]: 2025-10-10 10:04:35.448870899 +0000 UTC m=+0.631483005 container remove 74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gagarin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:04:35 compute-0 systemd[1]: libpod-conmon-74e6bd24ab6ae46fbabe7935d8959b77e5154eba0c9589f0cec7ea9e699d6cbf.scope: Deactivated successfully.
Oct 10 10:04:35 compute-0 sudo[251345]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-0 sudo[251889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhnbqgzdapmqhabklpivffvnjoqgitnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090675.2062256-3588-280881844924420/AnsiballZ_command.py'
Oct 10 10:04:35 compute-0 sudo[251889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:35 compute-0 sudo[251891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:35 compute-0 sudo[251891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:35 compute-0 sudo[251891]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-0 sudo[251917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:04:35 compute-0 sudo[251917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:35 compute-0 python3.9[251893]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:35 compute-0 sudo[251889]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-0 ceph-mon[73551]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.080916351 +0000 UTC m=+0.042480988 container create 0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:04:36 compute-0 systemd[1]: Started libpod-conmon-0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00.scope.
Oct 10 10:04:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:04:36 compute-0 sudo[252154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbyfkrggqtxncqplgaiesfrbhrqsdkyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090675.8697758-3588-75672009941270/AnsiballZ_command.py'
Oct 10 10:04:36 compute-0 sudo[252154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.064358831 +0000 UTC m=+0.025923488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.163572691 +0000 UTC m=+0.125137348 container init 0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.170793172 +0000 UTC m=+0.132357809 container start 0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.175211853 +0000 UTC m=+0.136776490 container attach 0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:36 compute-0 boring_booth[252148]: 167 167
Oct 10 10:04:36 compute-0 systemd[1]: libpod-0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00.scope: Deactivated successfully.
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.177676132 +0000 UTC m=+0.139240769 container died 0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2256f51940612f9f9c879cd9387d8c576b6f1e82a45cce3f2d4da023ed19109b-merged.mount: Deactivated successfully.
Oct 10 10:04:36 compute-0 podman[252084]: 2025-10-10 10:04:36.21833118 +0000 UTC m=+0.179895817 container remove 0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_booth, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:36 compute-0 systemd[1]: libpod-conmon-0e724ff3e2400dd7f613e79f6b831c1440d0dd9b3d80144867f991ad382bac00.scope: Deactivated successfully.
Oct 10 10:04:36 compute-0 python3.9[252156]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
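
The three ansible-ansible.legacy.command tasks in this window (tripleo_nova_metadata, tripleo_nova_scheduler, tripleo_nova_vnc_proxy) each run `systemctl reset-failed` so retired TripleO units stop lingering in `systemctl --failed` after removal. A sketch of the same cleanup loop, unit names verbatim from the log, run as root:

    #!/usr/bin/env python3
    # Sketch of the cleanup the Ansible tasks above perform: clear the
    # "failed" state of retired tripleo_nova_* units.
    import subprocess

    UNITS = [
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]
    for unit in UNITS:
        # check=False: reset-failed complains if the unit is no longer
        # loaded, which is harmless during teardown.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)
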
Oct 10 10:04:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:36.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:36 compute-0 sudo[252154]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.383810437 +0000 UTC m=+0.056797615 container create f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 10 10:04:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
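
The radosgw beast lines form a steady two-second heartbeat: 192.168.122.100 and .102 send anonymous `HEAD / HTTP/1.0` probes and get 200 back, the classic load-balancer liveness check. A sketch reproducing one probe; the RGW host and port are assumptions, since the log only records the client side:

    #!/usr/bin/env python3
    # Sketch of the liveness probe the beast lines record: an anonymous
    # "HEAD / HTTP/1.0" that should come back with a 200 status.
    import socket

    RGW_HOST, RGW_PORT = "compute-0.ctlplane.example.com", 8080  # assumed

    with socket.create_connection((RGW_HOST, RGW_PORT), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        status_line = sock.makefile("rb").readline()
        print(status_line.decode("ascii", "replace").strip())  # expect a 200
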
Oct 10 10:04:36 compute-0 systemd[1]: Started libpod-conmon-f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5.scope.
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.356147833 +0000 UTC m=+0.029135061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833469e1f5821f05f3ca50d8178c3c24637eefdd1cc22d5018b70e4e6b2bef6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833469e1f5821f05f3ca50d8178c3c24637eefdd1cc22d5018b70e4e6b2bef6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833469e1f5821f05f3ca50d8178c3c24637eefdd1cc22d5018b70e4e6b2bef6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833469e1f5821f05f3ca50d8178c3c24637eefdd1cc22d5018b70e4e6b2bef6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.47655236 +0000 UTC m=+0.149539528 container init f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.489747252 +0000 UTC m=+0.162734450 container start f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.493941606 +0000 UTC m=+0.166928794 container attach f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:04:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:36 compute-0 zealous_buck[252217]: {
Oct 10 10:04:36 compute-0 zealous_buck[252217]:     "0": [
Oct 10 10:04:36 compute-0 zealous_buck[252217]:         {
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "devices": [
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "/dev/loop3"
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             ],
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "lv_name": "ceph_lv0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "lv_size": "21470642176",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "name": "ceph_lv0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "tags": {
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.cluster_name": "ceph",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.crush_device_class": "",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.encrypted": "0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.osd_id": "0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.type": "block",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.vdo": "0",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:                 "ceph.with_tpm": "0"
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             },
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "type": "block",
Oct 10 10:04:36 compute-0 zealous_buck[252217]:             "vg_name": "ceph_vg0"
Oct 10 10:04:36 compute-0 zealous_buck[252217]:         }
Oct 10 10:04:36 compute-0 zealous_buck[252217]:     ]
Oct 10 10:04:36 compute-0 zealous_buck[252217]: }
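
The zealous_buck container is the `ceph-volume lvm list --format json` invocation from the sudo line above; its output maps OSD id 0 to LV ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster fsid, OSD fsid and encryption state carried as LV tags. A sketch that condenses that JSON (fed on stdin) into one line per OSD:

    #!/usr/bin/env python3
    # Sketch: digest `ceph-volume lvm list --format json` output like the
    # block above. Keys follow the structure shown in the log
    # (osd id -> list of LVs, each with "devices" and "tags").
    import json
    import sys

    inventory = json.load(sys.stdin)
    for osd_id, volumes in inventory.items():
        for lv in volumes:
            tags = lv.get("tags", {})
            print(
                f'osd.{osd_id}: {lv["lv_path"]} '
                f'on {",".join(lv.get("devices", []))} '
                f'(osd_fsid={tags.get("ceph.osd_fsid")}, '
                f'encrypted={tags.get("ceph.encrypted")})'
            )

Fed the JSON above, this prints: osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab, encrypted=0).
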
Oct 10 10:04:36 compute-0 systemd[1]: libpod-f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5.scope: Deactivated successfully.
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.832829212 +0000 UTC m=+0.505816410 container died f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 10 10:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-833469e1f5821f05f3ca50d8178c3c24637eefdd1cc22d5018b70e4e6b2bef6f-merged.mount: Deactivated successfully.
Oct 10 10:04:36 compute-0 podman[252178]: 2025-10-10 10:04:36.882370604 +0000 UTC m=+0.555357762 container remove f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:36 compute-0 systemd[1]: libpod-conmon-f85f087214a044d3a5d85f15f53e5f313674253c1d4dc945b2d03c6434a04ec5.scope: Deactivated successfully.
Oct 10 10:04:36 compute-0 sudo[251917]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:37 compute-0 sudo[252241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:37 compute-0 sudo[252241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:37 compute-0 sudo[252241]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:37.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
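
Alertmanager fails to deliver the ceph-dashboard webhook to the receivers on compute-1 and compute-2: both POSTs exhaust the two-attempt retry with "context deadline exceeded", i.e. a timeout rather than an HTTP error. A sketch retrying the delivery with an explicit timeout, to separate "endpoint down" from "endpoint slow"; the URL is verbatim from the error, while the empty version-4 payload is an assumption that suffices for a reachability test:

    #!/usr/bin/env python3
    # Sketch: probe the failing webhook receiver with a bounded timeout.
    # The payload shape (Alertmanager webhook v4) is an assumption.
    import json
    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    payload = json.dumps({"version": "4", "alerts": []}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except Exception as exc:  # URLError, socket.timeout, HTTP errors
        print(f"unreachable: {exc}")
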
Oct 10 10:04:37 compute-0 sudo[252266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:04:37 compute-0 sudo[252266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:37] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.499257712 +0000 UTC m=+0.040715402 container create 6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_pare, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:37 compute-0 systemd[1]: Started libpod-conmon-6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6.scope.
Oct 10 10:04:37 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.483010593 +0000 UTC m=+0.024468303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.578591167 +0000 UTC m=+0.120048897 container init 6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.58651527 +0000 UTC m=+0.127972960 container start 6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_pare, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.590882269 +0000 UTC m=+0.132340149 container attach 6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_pare, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:04:37 compute-0 stupefied_pare[252348]: 167 167
Oct 10 10:04:37 compute-0 systemd[1]: libpod-6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6.scope: Deactivated successfully.
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.595520148 +0000 UTC m=+0.136977838 container died 6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_pare, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88c0d058ea1b7550985472089989574d5b99fdf91531195da82606d8dc7e585-merged.mount: Deactivated successfully.
Oct 10 10:04:37 compute-0 podman[252332]: 2025-10-10 10:04:37.63411115 +0000 UTC m=+0.175568850 container remove 6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:37 compute-0 systemd[1]: libpod-conmon-6aeaa4c79b47911a4b9305cf657e5abc375624bdbea3773771771fff5b1615a6.scope: Deactivated successfully.
Oct 10 10:04:37 compute-0 podman[252371]: 2025-10-10 10:04:37.843847571 +0000 UTC m=+0.064740180 container create 1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mcclintock, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:37 compute-0 systemd[1]: Started libpod-conmon-1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a.scope.
Oct 10 10:04:37 compute-0 podman[252371]: 2025-10-10 10:04:37.814214243 +0000 UTC m=+0.035106912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:37 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f38aa21f22d2de2484b3e6bab3c65e8807f8327bf524f35c61481094e54a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f38aa21f22d2de2484b3e6bab3c65e8807f8327bf524f35c61481094e54a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f38aa21f22d2de2484b3e6bab3c65e8807f8327bf524f35c61481094e54a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f38aa21f22d2de2484b3e6bab3c65e8807f8327bf524f35c61481094e54a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:37 compute-0 ceph-mon[73551]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:37 compute-0 podman[252371]: 2025-10-10 10:04:37.950077604 +0000 UTC m=+0.170970213 container init 1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:04:37 compute-0 podman[252371]: 2025-10-10 10:04:37.965453115 +0000 UTC m=+0.186345694 container start 1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 10:04:37 compute-0 podman[252371]: 2025-10-10 10:04:37.970002161 +0000 UTC m=+0.190894780 container attach 1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mcclintock, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:04:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:38.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:38 compute-0 sudo[252580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwkoxofpbpnjldbpochbmbiqaqwcyolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090678.2723517-3795-77018820408121/AnsiballZ_file.py'
Oct 10 10:04:38 compute-0 sudo[252580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:38 compute-0 lvm[252591]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:04:38 compute-0 lvm[252591]: VG ceph_vg0 finished
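
These lvm lines are event-based autoactivation: udev saw the last (and only) PV, /dev/loop3, so VG ceph_vg0 is declared complete and activated. A sketch confirming the same state through lvm2's JSON report, run as root:

    #!/usr/bin/env python3
    # Sketch: confirm what the autoactivation lines report, i.e. that
    # ceph_vg0 is complete on /dev/loop3, via the lvm2 JSON report.
    import json
    import subprocess

    raw = subprocess.check_output(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name,pv_attr"]
    )
    for pv in json.loads(raw)["report"][0]["pv"]:
        print(pv["pv_name"], pv["vg_name"], pv["pv_attr"])
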
Oct 10 10:04:38 compute-0 angry_mcclintock[252387]: {}
Oct 10 10:04:38 compute-0 podman[252371]: 2025-10-10 10:04:38.75504043 +0000 UTC m=+0.975933009 container died 1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:38 compute-0 systemd[1]: libpod-1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a.scope: Deactivated successfully.
Oct 10 10:04:38 compute-0 systemd[1]: libpod-1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a.scope: Consumed 1.270s CPU time.
Oct 10 10:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e74f38aa21f22d2de2484b3e6bab3c65e8807f8327bf524f35c61481094e54a1-merged.mount: Deactivated successfully.
Oct 10 10:04:38 compute-0 podman[252371]: 2025-10-10 10:04:38.811940338 +0000 UTC m=+1.032832927 container remove 1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:38 compute-0 python3.9[252584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:38 compute-0 systemd[1]: libpod-conmon-1f98a0850b75246ddf2d54789dfeacdb11d07d393afc41b9e9ea6b693ea1801a.scope: Deactivated successfully.
Oct 10 10:04:38 compute-0 sudo[252266]: pam_unix(sudo:session): session closed for user root
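
That closes the second scan: the `ceph-volume raw list --format json` run from the 10:04:37 sudo line printed `{}` via angry_mcclintock, meaning no raw-mode (non-LVM) OSDs exist on this host; the only OSD is the LVM one listed earlier. A sketch running both scans the way cephadm does, fsid verbatim from the log, as root on a host with cephadm installed:

    #!/usr/bin/env python3
    # Sketch: the two ceph-volume scans cephadm just ran, side by side.
    # "lvm list" found osd.0; "raw list" returned {} (no raw-mode OSDs).
    import json
    import subprocess

    FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"

    def ceph_volume(*args: str) -> dict:
        out = subprocess.check_output(
            ["cephadm", "ceph-volume", "--fsid", FSID, "--"] + list(args)
        )
        return json.loads(out)

    lvm = ceph_volume("lvm", "list", "--format", "json")
    raw = ceph_volume("raw", "list", "--format", "json")
    print(f"lvm OSDs: {sorted(lvm)}  raw OSDs: {sorted(raw)}")
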
Oct 10 10:04:38 compute-0 sudo[252580]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:04:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:04:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
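
With both scans done, the mgr persists the refreshed inventory: the mon handle_command lines show `config-key set` on mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. A sketch reading the device blob back, assuming (as cephadm does for this key) that the stored value is JSON:

    #!/usr/bin/env python3
    # Sketch: read back the inventory blob the mon lines above show
    # cephadm writing. Key name is verbatim from the log; needs an
    # admin keyring.
    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.check_output(["ceph", "config-key", "get", key])
    print(json.dumps(json.loads(blob), indent=2)[:400])  # first few hundred chars
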
Oct 10 10:04:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:38 compute-0 sudo[252610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:04:38 compute-0 sudo[252610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:38 compute-0 sudo[252610]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:39 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 6.
Oct 10 10:04:39 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:04:39 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.692s CPU time.
Oct 10 10:04:39 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
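
The NFS Ganesha unit is in a restart loop: systemd has already restarted it six times and is starting attempt seven. A sketch reading the same counter plus the current state straight from systemd, unit name verbatim from the log:

    #!/usr/bin/env python3
    # Sketch: watch the restart loop the systemd lines above describe.
    # NRestarts is the same counter as "restart counter is at 6".
    import subprocess

    UNIT = ("ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4"
            "@nfs.cephfs.2.0.compute-0.ruydzo.service")
    out = subprocess.check_output(
        ["systemctl", "show", UNIT, "-p", "NRestarts,ActiveState,SubState"],
        text=True,
    )
    print(out.strip())  # e.g. NRestarts=6 / ActiveState=activating
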
Oct 10 10:04:39 compute-0 sudo[252820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrjiqzdjhjpocvscnipenefvfeknqici ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090679.0064745-3795-25106354987801/AnsiballZ_file.py'
Oct 10 10:04:39 compute-0 sudo[252820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:39 compute-0 podman[252772]: 2025-10-10 10:04:39.351813985 +0000 UTC m=+0.081774854 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
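
The iscsid health_status event carries edpm_ansible's whole config_data label inline. Its value is a Python-literal dict (single-quoted strings), so json.loads would choke on it; ast.literal_eval parses it fine. A sketch pulling the label back out of the running container:

    #!/usr/bin/env python3
    # Sketch: recover the config_data label shown inline in the
    # health_status line. The value is a Python-literal dict, hence
    # ast.literal_eval rather than json.loads.
    import ast
    import subprocess

    fmt = '{{ index .Config.Labels "config_data" }}'
    raw = subprocess.check_output(
        ["podman", "inspect", "iscsid", "--format", fmt], text=True
    )
    config = ast.literal_eval(raw.strip())
    print(config["healthcheck"]["test"], config["net"], sep="\n")
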
Oct 10 10:04:39 compute-0 podman[252846]: 2025-10-10 10:04:39.427935597 +0000 UTC m=+0.047073505 container create c5924c96619f120cfd7480d27fef3f5723b94a395c2a3a65294d1f3dffa6035c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dcde97246b697626d670fb7e173e5d7238bfb022af9eecbd981b5d1f028f0f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dcde97246b697626d670fb7e173e5d7238bfb022af9eecbd981b5d1f028f0f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dcde97246b697626d670fb7e173e5d7238bfb022af9eecbd981b5d1f028f0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dcde97246b697626d670fb7e173e5d7238bfb022af9eecbd981b5d1f028f0f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-0 podman[252846]: 2025-10-10 10:04:39.500241997 +0000 UTC m=+0.119379905 container init c5924c96619f120cfd7480d27fef3f5723b94a395c2a3a65294d1f3dffa6035c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:04:39 compute-0 podman[252846]: 2025-10-10 10:04:39.410851291 +0000 UTC m=+0.029989229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:39 compute-0 podman[252846]: 2025-10-10 10:04:39.507105616 +0000 UTC m=+0.126243534 container start c5924c96619f120cfd7480d27fef3f5723b94a395c2a3a65294d1f3dffa6035c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 10:04:39 compute-0 bash[252846]: c5924c96619f120cfd7480d27fef3f5723b94a395c2a3a65294d1f3dffa6035c
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:04:39 compute-0 python3.9[252829]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:39 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:04:39 compute-0 sudo[252820]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:04:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:04:39 compute-0 ceph-mon[73551]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:39 compute-0 sudo[253084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imoydinkiezktfwepyboubxxvefhlnou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090679.689338-3795-62672701160764/AnsiballZ_file.py'
Oct 10 10:04:39 compute-0 sudo[253084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:40 compute-0 podman[253028]: 2025-10-10 10:04:40.006576782 +0000 UTC m=+0.070623336 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:04:40 compute-0 podman[253029]: 2025-10-10 10:04:40.031566191 +0000 UTC m=+0.094079847 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 10 10:04:40 compute-0 python3.9[253096]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:40 compute-0 sudo[253084]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 10:04:40 compute-0 sudo[253254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzpfnrodxeylldiztzaanaleoicyxuuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090680.5616796-3861-267846489734201/AnsiballZ_file.py'
Oct 10 10:04:40 compute-0 sudo[253254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:41 compute-0 python3.9[253256]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:41 compute-0 sudo[253254]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:41 compute-0 sudo[253406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfscepkabifpvxdzmbuxleuwjrhxhgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090681.2949471-3861-59371809497548/AnsiballZ_file.py'
Oct 10 10:04:41 compute-0 sudo[253406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:41 compute-0 python3.9[253408]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:41 compute-0 sudo[253406]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:41 compute-0 ceph-mon[73551]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 10:04:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:04:41.890 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:04:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:04:41.890 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:04:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:04:41.890 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:04:42 compute-0 sudo[253559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvpapvmsfapidywultnmshglkxnfjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090681.9842718-3861-34508426596468/AnsiballZ_file.py'
Oct 10 10:04:42 compute-0 sudo[253559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:42.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:42 compute-0 python3.9[253561]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:42 compute-0 sudo[253559]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:42 compute-0 sudo[253712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcdvughxsbvyjroieslexiphyypvcwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090682.6444101-3861-132811353323068/AnsiballZ_file.py'
Oct 10 10:04:42 compute-0 sudo[253712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:43 compute-0 python3.9[253714]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:43 compute-0 sudo[253712]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:43 compute-0 sudo[253864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbinkvrxegxjxontvyunrzyuqsqqblfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090683.311405-3861-253007566380335/AnsiballZ_file.py'
Oct 10 10:04:43 compute-0 sudo[253864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:43 compute-0 python3.9[253866]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:43 compute-0 sudo[253864]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:43 compute-0 ceph-mon[73551]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:44 compute-0 sudo[254017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjjrolhzjhlolveeztqnticurajwmti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090683.945227-3861-158017422919260/AnsiballZ_file.py'
Oct 10 10:04:44 compute-0 sudo[254017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:44 compute-0 python3.9[254019]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:44 compute-0 sudo[254017]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:45 compute-0 sudo[254170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-catgibwafbpyttmuspymjdxwvzxoysgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090684.7147086-3861-78066182482397/AnsiballZ_file.py'
Oct 10 10:04:45 compute-0 sudo[254170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:45 compute-0 python3.9[254172]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:45 compute-0 sudo[254170]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:45 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:04:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:45 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:04:45 compute-0 sudo[254322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esitizyphrokogonofjptvyuuijiapen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090685.424802-3861-118831251370652/AnsiballZ_file.py'
Oct 10 10:04:45 compute-0 sudo[254322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:45 compute-0 python3.9[254324]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:45 compute-0 ceph-mon[73551]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:45 compute-0 sudo[254322]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:04:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:46.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:04:46 compute-0 sudo[254475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voyjczwmtvjmwgzeyfmcuatbobtdrudh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090686.106723-3861-141339124442240/AnsiballZ_file.py'
Oct 10 10:04:46 compute-0 sudo[254475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:46 compute-0 python3.9[254477]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:46 compute-0 sudo[254475]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:47.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:04:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:47] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:47 compute-0 ceph-mon[73551]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:48 compute-0 podman[254504]: 2025-10-10 10:04:48.227185474 +0000 UTC m=+0.062795948 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:04:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:04:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:04:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:48.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:49 compute-0 ceph-mon[73551]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:50.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:50.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:04:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:51 compute-0 ceph-mon[73551]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:52.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 10:04:52 compute-0 sudo[254668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfkzljlqmklxchqsnkkbsunhqsmlwwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090692.4278817-4228-226265166947751/AnsiballZ_getent.py'
Oct 10 10:04:52 compute-0 sudo[254668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:53 compute-0 python3.9[254670]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 10 10:04:53 compute-0 sudo[254668]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:53 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:53 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:04:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7747 writes, 31K keys, 7747 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7747 writes, 1564 syncs, 4.95 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 695 writes, 1219 keys, 695 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 695 writes, 338 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:04:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:53 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:53 compute-0 sudo[254821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljwoquvmjvawlpkhkzgedkgouwrchvlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090693.370892-4252-43129439997531/AnsiballZ_group.py'
Oct 10 10:04:53 compute-0 sudo[254821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:53 compute-0 ceph-mon[73551]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 10:04:54 compute-0 python3.9[254823]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 10:04:54 compute-0 groupadd[254825]: group added to /etc/group: name=nova, GID=42436
Oct 10 10:04:54 compute-0 groupadd[254825]: group added to /etc/gshadow: name=nova
Oct 10 10:04:54 compute-0 groupadd[254825]: new group: name=nova, GID=42436
Oct 10 10:04:54 compute-0 sudo[254821]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100454 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:04:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:55 compute-0 sudo[254981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbrbjifuwybgcmdjtbykdiewisohkyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090694.432647-4276-152449240426326/AnsiballZ_user.py'
Oct 10 10:04:55 compute-0 sudo[254981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:55 compute-0 python3.9[254983]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 10:04:55 compute-0 useradd[254985]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 10 10:04:55 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:04:55 compute-0 useradd[254985]: add 'nova' to group 'libvirt'
Oct 10 10:04:55 compute-0 useradd[254985]: add 'nova' to shadow group 'libvirt'
Oct 10 10:04:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100455 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:04:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:55 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:55 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:55 compute-0 sudo[254981]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:55 compute-0 sudo[254993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:04:55 compute-0 sudo[254993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:55 compute-0 sudo[254993]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:55 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:56 compute-0 ceph-mon[73551]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:56 compute-0 ceph-osd[81941]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000044s
Oct 10 10:04:56 compute-0 sshd-session[255043]: Accepted publickey for zuul from 192.168.122.30 port 50958 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:04:56 compute-0 systemd-logind[806]: New session 57 of user zuul.
Oct 10 10:04:56 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 10 10:04:56 compute-0 sshd-session[255043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:04:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:56 compute-0 sshd-session[255046]: Received disconnect from 192.168.122.30 port 50958:11: disconnected by user
Oct 10 10:04:56 compute-0 sshd-session[255046]: Disconnected from user zuul 192.168.122.30 port 50958
Oct 10 10:04:56 compute-0 sshd-session[255043]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:04:56 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct 10 10:04:56 compute-0 systemd-logind[806]: Session 57 logged out. Waiting for processes to exit.
Oct 10 10:04:56 compute-0 systemd-logind[806]: Removed session 57.
Oct 10 10:04:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:04:57.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:04:57 compute-0 python3.9[255197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:04:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:57 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:57 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:04:57] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:04:57 compute-0 python3.9[255318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090696.755243-4351-131198845729961/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:57 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:58 compute-0 ceph-mon[73551]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:04:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:58.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:04:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:04:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:58.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:58 compute-0 python3.9[255469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:04:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:58 compute-0 python3.9[255545]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:59 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:59 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:59 compute-0 python3.9[255696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:04:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:04:59 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:00 compute-0 ceph-mon[73551]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:05:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:00.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:00.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:00 compute-0 python3.9[255818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090699.0917165-4351-171848992793546/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:05:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:05:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:01 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:01 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:01 compute-0 python3.9[255969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:01 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:02 compute-0 python3.9[256090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090700.6892147-4351-135681469198281/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:02 compute-0 ceph-mon[73551]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:05:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:02.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:05:02 compute-0 python3.9[256241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:03 compute-0 python3.9[256363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090702.2423189-4351-187131142623668/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:03 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:03 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:03 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:04 compute-0 ceph-mon[73551]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:05:04 compute-0 sudo[256514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pslqtiauyxrlmtgvwsgiavcncdxvdkqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090703.881954-4558-134882583310720/AnsiballZ_file.py'
Oct 10 10:05:04 compute-0 sudo[256514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:04.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:04 compute-0 python3.9[256516]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:04 compute-0 sudo[256514]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:05 compute-0 sudo[256667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuajuvcglhjxcmwgjpfvydurvpiihbou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090704.6918068-4582-201465212955517/AnsiballZ_copy.py'
Oct 10 10:05:05 compute-0 sudo[256667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:05 compute-0 python3.9[256669]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:05 compute-0 sudo[256667]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:05 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:05 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:05 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:05 compute-0 sudo[256819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfgmhtfkmykymarqohgzetwzsayhatvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090705.5789173-4606-152066615415556/AnsiballZ_stat.py'
Oct 10 10:05:05 compute-0 sudo[256819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:06 compute-0 python3.9[256821]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:06 compute-0 ceph-mon[73551]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:06 compute-0 sudo[256819]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:06 compute-0 sudo[256972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxcovhuocredtpmtrhbpcarglpdrlury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090706.3617704-4630-108155240710202/AnsiballZ_stat.py'
Oct 10 10:05:06 compute-0 sudo[256972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:06 compute-0 python3.9[256974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:06 compute-0 sudo[256972]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:07.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:05:07 compute-0 sudo[257096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btwgcnvpxrtvposdzuxkxyxcuauvxkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090706.3617704-4630-108155240710202/AnsiballZ_copy.py'
Oct 10 10:05:07 compute-0 sudo[257096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:07 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:07 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:07] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:05:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:07] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:05:07 compute-0 python3.9[257098]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760090706.3617704-4630-108155240710202/.source _original_basename=.y1xws7ro follow=False checksum=40bc05250e9970823b0ea8ee6fd3cf6a0acbd513 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 10 10:05:07 compute-0 sudo[257096]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:07 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:08 compute-0 ceph-mon[73551]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:08.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:08 compute-0 python3.9[257251]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:09 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:09 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:09 compute-0 python3.9[257404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:09 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:09 compute-0 podman[257499]: 2025-10-10 10:05:09.902058305 +0000 UTC m=+0.096233006 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:05:10 compute-0 python3.9[257538]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090708.904679-4708-50337863035952/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:10 compute-0 ceph-mon[73551]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:10 compute-0 podman[257546]: 2025-10-10 10:05:10.146384759 +0000 UTC m=+0.071491934 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:05:10 compute-0 podman[257548]: 2025-10-10 10:05:10.195658774 +0000 UTC m=+0.110163501 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:05:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:10.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:10 compute-0 python3.9[257742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:11 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:11 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:11 compute-0 python3.9[257864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090710.3220322-4753-131957600372962/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:11 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:12 compute-0 ceph-mon[73551]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:12 compute-0 sudo[258015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kluobeaungjvgbzmkymbmfjweglbsmoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090712.0052607-4804-274192883696475/AnsiballZ_container_config_data.py'
Oct 10 10:05:12 compute-0 sudo[258015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:12.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:12 compute-0 python3.9[258017]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 10 10:05:12 compute-0 sudo[258015]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:13 compute-0 sudo[258168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onfyottgzzhyyrqycodbffpipzbgbjtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090712.8876467-4831-52032843000503/AnsiballZ_container_config_hash.py'
Oct 10 10:05:13 compute-0 sudo[258168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:13 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:13 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:13 compute-0 python3.9[258170]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:05:13 compute-0 sudo[258168]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:13 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:14 compute-0 ceph-mon[73551]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:14 compute-0 sudo[258321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezglqdoxafitxzmlcodntqewvuyqnco ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090713.906702-4861-202431551022481/AnsiballZ_edpm_container_manage.py'
Oct 10 10:05:14 compute-0 sudo[258321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:14.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:14 compute-0 python3[258323]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:05:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:15 compute-0 ceph-mon[73551]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:15 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:15 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:15 compute-0 sudo[258362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:05:15 compute-0 sudo[258362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:15 compute-0 sudo[258362]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:15 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:05:16
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.nfs', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:05:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:05:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:05:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:05:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:17.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:05:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:17 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:17 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:17] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Oct 10 10:05:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:17] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Oct 10 10:05:17 compute-0 ceph-mon[73551]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:17 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:18.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:19 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:19 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:19 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:20 compute-0 podman[258408]: 2025-10-10 10:05:20.045265908 +0000 UTC m=+0.875644190 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:05:20 compute-0 ceph-mon[73551]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:20.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:20.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:21 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:21 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:21 compute-0 ceph-mon[73551]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:21 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:22.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:23 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:23 compute-0 ceph-mon[73551]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:23 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:23 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:25 compute-0 podman[258338]: 2025-10-10 10:05:25.126097996 +0000 UTC m=+10.471269268 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 10:05:25 compute-0 podman[258502]: 2025-10-10 10:05:25.252636603 +0000 UTC m=+0.023465859 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 10:05:25 compute-0 podman[258502]: 2025-10-10 10:05:25.362276963 +0000 UTC m=+0.133106199 container create 5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init)
Oct 10 10:05:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:25 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:25 compute-0 python3[258323]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 10 10:05:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:25 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:25 compute-0 sudo[258321]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:25 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:26 compute-0 ceph-mon[73551]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:26.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:27 compute-0 sudo[258693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buzfhsqnxewxzqjpcdkxaraiekgmsuzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090726.7405987-4885-246906263317379/AnsiballZ_stat.py'
Oct 10 10:05:27 compute-0 sudo[258693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:27.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:05:27 compute-0 python3.9[258695]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:27 compute-0 ceph-mon[73551]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:27 compute-0 sudo[258693]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:27 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:27] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Oct 10 10:05:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:27] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Oct 10 10:05:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:27 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:27 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:28 compute-0 sudo[258848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpsmutdtcmsedjenycjvviyyvsiahfgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090728.0739455-4921-216856351786145/AnsiballZ_container_config_data.py'
Oct 10 10:05:28 compute-0 sudo[258848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:28.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:28 compute-0 python3.9[258850]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 10 10:05:28 compute-0 sudo[258848]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:29 compute-0 sudo[259001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itmwjkhpgyyenpumlayywtwucydoxsve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090728.9951403-4948-184855737703711/AnsiballZ_container_config_hash.py'
Oct 10 10:05:29 compute-0 sudo[259001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:29 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:29 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:29 compute-0 ceph-mon[73551]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:29 compute-0 python3.9[259003]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:05:29 compute-0 sudo[259001]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:29 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:30.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:30 compute-0 sudo[259154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlifnwxkulxmgfvqaikyetldibzazntj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090730.1976352-4978-207349449500502/AnsiballZ_edpm_container_manage.py'
Oct 10 10:05:30 compute-0 sudo[259154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:30 compute-0 python3[259156]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:05:31 compute-0 podman[259195]: 2025-10-10 10:05:31.111734157 +0000 UTC m=+0.057218600 container create ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 10:05:31 compute-0 podman[259195]: 2025-10-10 10:05:31.082589905 +0000 UTC m=+0.028074398 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 10:05:31 compute-0 python3[259156]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 10 10:05:31 compute-0 sudo[259154]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:05:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:31 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:31 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:31 compute-0 ceph-mon[73551]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:31 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:31 compute-0 sudo[259383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvxiequahfgpkgzsejoftxhboncweksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090731.569672-5002-35207224957444/AnsiballZ_stat.py'
Oct 10 10:05:31 compute-0 sudo[259383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:32 compute-0 python3.9[259385]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:32 compute-0 sudo[259383]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:32.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
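The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 recur roughly every two seconds, a pattern consistent with load-balancer health probes. The beast access-log line is easy to pick apart mechanically; the regex below is inferred from the lines in this log, not from any radosgw specification:

    import re

    # Field layout inferred from the beast lines above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f96beba75d0: 192.168.122.102 - anonymous '
            '[10/Oct/2025:10:05:32.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000033s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group("client"), m.group("request"),
              m.group("status"), m.group("latency"))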
Oct 10 10:05:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:32.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:32 compute-0 sudo[259539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ailbsrdluvclnhbwhyzybaxgqfkmcafe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090732.5224206-5029-53762626741263/AnsiballZ_file.py'
Oct 10 10:05:32 compute-0 sudo[259539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:33 compute-0 python3.9[259541]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:33 compute-0 sudo[259539]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:33 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:33 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:33 compute-0 sudo[259690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakrlaogflxhuewqfxrzxrmqnrskocei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090733.101499-5029-24296196066183/AnsiballZ_copy.py'
Oct 10 10:05:33 compute-0 sudo[259690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:33 compute-0 ceph-mon[73551]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:33 compute-0 python3.9[259692]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090733.101499-5029-24296196066183/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:33 compute-0 sudo[259690]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:33 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:34 compute-0 sudo[259767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwflzqkvntbhrqwkafsytvqgothicorl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090733.101499-5029-24296196066183/AnsiballZ_systemd.py'
Oct 10 10:05:34 compute-0 sudo[259767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:34.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:34 compute-0 python3.9[259769]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:05:34 compute-0 systemd[1]: Reloading.
Oct 10 10:05:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:34 compute-0 systemd-rc-local-generator[259790]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:05:34 compute-0 systemd-sysv-generator[259800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:05:35 compute-0 sudo[259767]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:35 compute-0 sudo[259879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgucrszaxytgiirynfppllnhhfazmwio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090733.101499-5029-24296196066183/AnsiballZ_systemd.py'
Oct 10 10:05:35 compute-0 sudo[259879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:35 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:35 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:35 compute-0 python3.9[259881]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
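Taken together, the four Ansible tasks since 10:05:33 perform a standard unit install-and-restart cycle: remove the stale edpm_nova_compute.requires drop-in, copy the rendered unit file into place, daemon-reload, then enable and restart the service. A compact restatement of that sequence (the source path of the rendered unit is illustrative; Ansible actually copies from a transient ~/.ansible/tmp directory):

    import os, shutil, subprocess

    unit = "/etc/systemd/system/edpm_nova_compute.service"
    requires = "/etc/systemd/system/edpm_nova_compute.requires"

    # 1. ansible-file state=absent
    if os.path.isdir(requires):
        shutil.rmtree(requires)
    elif os.path.lexists(requires):
        os.remove(requires)

    # 2. ansible-copy: install the rendered unit (source path illustrative)
    shutil.copy("/tmp/rendered/edpm_nova_compute.service", unit)
    os.chmod(unit, 0o644)

    # 3. ansible-systemd daemon_reload=True
    subprocess.run(["systemctl", "daemon-reload"], check=True)

    # 4. ansible-systemd enabled=True state=restarted
    subprocess.run(["systemctl", "enable", "edpm_nova_compute.service"], check=True)
    subprocess.run(["systemctl", "restart", "edpm_nova_compute.service"], check=True)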
Oct 10 10:05:35 compute-0 sudo[259882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:05:35 compute-0 sudo[259882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:35 compute-0 sudo[259882]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:35 compute-0 systemd[1]: Reloading.
Oct 10 10:05:35 compute-0 systemd-rc-local-generator[259936]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:05:35 compute-0 ceph-mon[73551]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:35 compute-0 systemd-sysv-generator[259939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:05:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:35 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:36 compute-0 systemd[1]: Starting nova_compute container...
Oct 10 10:05:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-0 podman[259946]: 2025-10-10 10:05:36.18525989 +0000 UTC m=+0.100498667 container init ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 10 10:05:36 compute-0 podman[259946]: 2025-10-10 10:05:36.193570318 +0000 UTC m=+0.108809065 container start ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 10 10:05:36 compute-0 podman[259946]: nova_compute
Oct 10 10:05:36 compute-0 nova_compute[259962]: + sudo -E kolla_set_configs
Oct 10 10:05:36 compute-0 systemd[1]: Started nova_compute container.
Oct 10 10:05:36 compute-0 sudo[259879]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Validating config file
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying service configuration files
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Deleting /etc/ceph
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Creating directory /etc/ceph
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Writing out command to execute
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:36 compute-0 nova_compute[259962]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
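The INFO lines above are kolla_set_configs walking /var/lib/kolla/config_files/config.json with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: every destination is deleted and re-copied on each container start (COPY_ONCE, by contrast, would only populate on the first run). A trimmed sketch of that loop; the real kolla_set_configs also handles ownership, globbing, and merges, all omitted here:

    import json, os, shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    for entry in config.get("config_files", []):
        src, dest = entry["source"], entry["dest"]
        if os.path.isdir(dest):
            shutil.rmtree(dest)            # "Deleting /etc/ceph"
        elif os.path.lexists(dest):
            os.remove(dest)                # "Deleting /etc/nova/nova.conf"
        if os.path.isdir(src):
            shutil.copytree(src, dest)     # "Creating directory /etc/ceph"
        else:
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy(src, dest)         # "Copying ... to ..."
        os.chmod(dest, int(entry.get("perm", "0600"), 8))  # "Setting permission"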
Oct 10 10:05:36 compute-0 nova_compute[259962]: ++ cat /run_command
Oct 10 10:05:36 compute-0 nova_compute[259962]: + CMD=nova-compute
Oct 10 10:05:36 compute-0 nova_compute[259962]: + ARGS=
Oct 10 10:05:36 compute-0 nova_compute[259962]: + sudo kolla_copy_cacerts
Oct 10 10:05:36 compute-0 nova_compute[259962]: + [[ ! -n '' ]]
Oct 10 10:05:36 compute-0 nova_compute[259962]: + . kolla_extend_start
Oct 10 10:05:36 compute-0 nova_compute[259962]: Running command: 'nova-compute'
Oct 10 10:05:36 compute-0 nova_compute[259962]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 10:05:36 compute-0 nova_compute[259962]: + umask 0022
Oct 10 10:05:36 compute-0 nova_compute[259962]: + exec nova-compute
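The shell trace ends with kolla_start's handoff: it reads the command that kolla_set_configs wrote to /run_command ("Writing out command to execute" above), sets the umask, and exec's it, so nova-compute replaces the shell as the container's main process. The same tail, restated in Python:

    import os, shlex

    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())   # here: ["nova-compute"]

    print("Running command: %r" % cmd[0])
    os.umask(0o022)
    os.execvp(cmd[0], cmd)                    # does not return on success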
Oct 10 10:05:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:36.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:37.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
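This alertmanager dispatch error means the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver, both taken from the message above) did not answer before the notification deadline. When debugging this kind of failure, a throwaway listener on the target host separates network problems from dashboard problems; the stub below only proves connectivity and is in no way the dashboard API:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Disposable stand-in for the unreachable /api/prometheus_receiver
    # endpoint; accepts any POST and returns 200.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            print("alert payload:", self.rfile.read(length)[:200])
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()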
Oct 10 10:05:37 compute-0 python3.9[260125]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:37 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:37] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 10 10:05:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:37] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 10 10:05:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:37 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:37 compute-0 ceph-mon[73551]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:37 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:38 compute-0 python3.9[260275]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:38 compute-0 nova_compute[259962]: 2025-10-10 10:05:38.379 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:38 compute-0 nova_compute[259962]: 2025-10-10 10:05:38.379 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:38 compute-0 nova_compute[259962]: 2025-10-10 10:05:38.380 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:38 compute-0 nova_compute[259962]: 2025-10-10 10:05:38.380 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
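os_vif discovers its three plugins (linux_bridge, noop, ovs) through setuptools entry points, and the initialize() path shown in the log lines above drives that discovery with stevedore. The same enumeration can be done directly; the "os_vif" namespace is taken from those lines:

    from stevedore import extension

    # Enumerate the plugins os_vif just reported loading.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in mgr:
        print(ext.name, ext.plugin)   # linux_bridge, noop, ovs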
Oct 10 10:05:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:38.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:38 compute-0 nova_compute[259962]: 2025-10-10 10:05:38.531 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:38 compute-0 nova_compute[259962]: 2025-10-10 10:05:38.564 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
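The grep probe checks whether the installed iscsiadm binary contains the string node.session.scan, a capability test for manual iSCSI scan support. The wrapper producing the paired "Running cmd"/"returned" DEBUG lines is oslo_concurrency.processutils.execute; run directly it looks like this (check_exit_code tolerates grep's exit status 1 when the string is absent):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1])
    print("manual scan supported:", bool(out))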
Oct 10 10:05:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.084 2 INFO nova.virt.driver [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 10 10:05:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.242 2 INFO nova.compute.provider_config [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.258 2 DEBUG oslo_concurrency.lockutils [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.259 2 DEBUG oslo_concurrency.lockutils [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.259 2 DEBUG oslo_concurrency.lockutils [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
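The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.service serializing service startup through oslo_concurrency.lockutils. The same primitive is available as a context manager and emits comparable DEBUG lines:

    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        pass  # critical section; lock is process-local unless external=True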
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.259 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.259 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.259 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.260 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.261 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.261 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.261 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.261 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.261 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.261 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.262 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.262 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.262 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.262 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.262 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.262 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.263 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.264 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.264 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.264 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.264 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.264 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.264 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.265 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.266 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.266 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.266 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.266 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.266 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.266 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.267 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.268 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.269 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.270 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.271 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.271 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.271 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.271 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.271 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.272 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.273 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.274 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.275 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.276 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.276 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.276 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.276 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.276 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.276 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.277 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.277 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.277 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.277 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.277 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.278 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.278 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.278 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.278 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.278 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.278 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.279 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.280 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.280 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.280 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.280 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.280 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.280 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.281 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.282 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.282 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.282 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.282 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.282 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.282 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.283 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.284 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.284 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.284 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.284 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.284 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.284 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.285 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.285 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.285 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.285 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.285 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.285 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.286 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.286 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.286 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.286 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.286 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.286 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.287 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.287 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.287 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.287 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.287 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.288 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.289 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.289 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.289 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.289 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.289 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.290 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.291 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.292 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.292 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.292 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.292 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.293 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.293 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 sudo[260432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.293 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.294 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.294 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.294 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.294 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.294 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.295 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.295 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.295 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.295 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.295 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.296 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.296 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.296 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.297 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.297 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.297 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.297 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.297 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.298 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.298 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.298 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.298 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.298 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.299 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.299 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.299 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.299 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.299 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.299 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.300 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.300 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.300 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.300 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.300 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.301 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.301 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 sudo[260432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.301 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.301 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.301 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.301 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.302 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.303 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.303 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.303 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.303 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.303 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.303 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.304 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.304 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.304 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.304 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.304 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.304 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.305 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.305 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.305 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.305 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 sudo[260432]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.305 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.305 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.306 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.306 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.306 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.306 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.306 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.306 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.307 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.307 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.307 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.307 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.307 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.307 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.308 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.308 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.308 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.308 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.308 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.308 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.309 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.309 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.309 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.309 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.309 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.310 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.310 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.310 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.310 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.310 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.310 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.311 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.311 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.311 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.311 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.311 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.312 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.313 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
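(The glance.* lines above are oslo.config echoing back what it resolved for the [glance] group of nova.conf on this host. As a rough sketch — option names and values copied from the log; everything not shown is assumed to be left at its default — the two non-default entries correspond to an INI fragment like:

    [glance]
    # endpoint interfaces Nova may use when selecting the image API
    valid_interfaces = internal
    # region used to look up the image service in the Keystone catalog
    region_name = regionOne
)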
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.314 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.315 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.315 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.315 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.315 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.315 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.315 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.316 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.316 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.316 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.316 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.316 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.317 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.317 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.317 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.317 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.317 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.318 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.318 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.318 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.319 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.319 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.319 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.319 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.319 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.319 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.320 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.321 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.322 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.322 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.322 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.322 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.322 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.322 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.323 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.324 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.324 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.324 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.324 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.324 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.324 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.325 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.326 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.327 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.328 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.329 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.330 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.331 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.332 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.333 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.333 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.333 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.333 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.334 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.334 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.334 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.334 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.334 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.335 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.335 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.335 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.335 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.335 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.336 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.336 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.336 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.336 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.337 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.337 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.337 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.337 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.337 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.338 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.338 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.338 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.338 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.338 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.338 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.339 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.339 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.339 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.339 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.339 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.340 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.340 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.340 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.340 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.340 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.341 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.341 2 WARNING oslo_config.cfg [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 10:05:39 compute-0 nova_compute[259962]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 10:05:39 compute-0 nova_compute[259962]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Oct 10 10:05:39 compute-0 nova_compute[259962]: and ``live_migration_inbound_addr`` respectively.
Oct 10 10:05:39 compute-0 nova_compute[259962]: ).  Its value may be silently ignored in the future.
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.341 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
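(The WARNING above is oslo.config flagging that this host's [libvirt] section still sets the deprecated live_migration_uri. Per the message, the same effect comes from live_migration_scheme and, where migration traffic should bind to a dedicated address, live_migration_inbound_addr. A minimal sketch of the replacement, assuming the current qemu+tls://%s/system URI is only being used to select the TLS scheme; the inbound address shown is a hypothetical placeholder, not taken from this log:

    [libvirt]
    # was: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # optional, hypothetical value -- set only if migration traffic
    # must use a specific local address:
    # live_migration_inbound_addr = 172.17.1.10
)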
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.341 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.341 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.342 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.343 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.343 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.343 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.343 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.343 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.343 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rbd_secret_uuid        = 21f084a3-af34-5230-afe4-ea5cd24a55f4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.344 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.345 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.346 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.346 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.346 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.346 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.346 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.346 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.347 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.347 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.347 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.347 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.347 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.347 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.348 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.348 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.348 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.348 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.348 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.349 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.349 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.349 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.349 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.349 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.350 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.350 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.350 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.350 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.350 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.351 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.351 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.351 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.351 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.351 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.352 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.352 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.352 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.352 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.352 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.353 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.353 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.353 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.353 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.353 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.354 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.354 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.354 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.354 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.354 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.355 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.355 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.355 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.355 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.355 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.356 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.356 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.356 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.356 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.356 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.357 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.357 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.357 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.357 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.357 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.357 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.358 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.358 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.358 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.358 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.358 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.359 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.359 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.359 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.359 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.359 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.359 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.360 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.360 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.360 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.360 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.360 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.361 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.361 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.361 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.361 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.361 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.362 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.362 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.362 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.362 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.362 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.362 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.363 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.363 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.363 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.363 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.364 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.364 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.364 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.364 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.364 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.365 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.365 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.365 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.365 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.365 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.365 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.366 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.366 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.366 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.366 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.367 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.367 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.367 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.367 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.367 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.368 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.368 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.368 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.368 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.368 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.369 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.369 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 sudo[260457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.370 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.370 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.370 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.371 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.371 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.371 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.371 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.372 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 sudo[260457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.372 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.372 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.372 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.373 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.373 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.373 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.373 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.373 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.374 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.374 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.374 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.374 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.374 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.375 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.375 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.375 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.375 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.375 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.376 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.376 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.376 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.376 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.377 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.377 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.377 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.377 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.377 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.377 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.378 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.378 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.378 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.378 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.378 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.379 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.379 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.379 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.379 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.380 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.381 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.382 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.382 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.382 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.382 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.382 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.382 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.383 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.383 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.383 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.383 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.383 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.383 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.384 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.384 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.384 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.384 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.384 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.384 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.385 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.386 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.386 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.386 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.386 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.387 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.387 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.387 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.387 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.387 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.387 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.388 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.388 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.388 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.388 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.388 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.389 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.389 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.389 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.389 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.389 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.389 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.390 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.391 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.391 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.391 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.391 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.391 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.391 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.392 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.393 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.393 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.393 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.393 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.393 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.393 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.394 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.395 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.395 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.395 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.395 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.395 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.396 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.397 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.397 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.397 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.397 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.397 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.397 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.398 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.398 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.398 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.398 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.398 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.398 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.399 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.399 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.399 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.399 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.399 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.399 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.400 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.400 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.400 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.400 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.400 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.400 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.401 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.401 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.401 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.401 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.401 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.401 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.402 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.402 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.402 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.402 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.402 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.402 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.403 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.403 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.403 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 python3.9[260431]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.403 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.403 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.403 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.404 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.405 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.406 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.407 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.408 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.408 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.408 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.408 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.408 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.408 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.409 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.410 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.410 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.410 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.410 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.410 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.410 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.411 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.411 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.411 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.411 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.411 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.412 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.413 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.414 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.415 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.415 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.415 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.415 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.415 2 DEBUG oslo_service.service [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.416 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.436 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.437 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.437 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.437 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 10 10:05:39 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 10:05:39 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.517 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff46d8ea7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.520 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff46d8ea7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.521 2 INFO nova.virt.libvirt.driver [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Connection event '1' reason 'None'
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.534 2 WARNING nova.virt.libvirt.driver [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 10 10:05:39 compute-0 nova_compute[259962]: 2025-10-10 10:05:39.535 2 DEBUG nova.virt.libvirt.volume.mount [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 10 10:05:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:05:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:05:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:39 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:39 compute-0 ceph-mon[73551]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:39 compute-0 sudo[260457]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:40 compute-0 podman[260666]: 2025-10-10 10:05:40.098141153 +0000 UTC m=+0.063506823 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 10 10:05:40 compute-0 sudo[260743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joyaxrypncsbbqfvqiseavjjeqjraxhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090739.861256-5209-20874100353347/AnsiballZ_podman_container.py'
Oct 10 10:05:40 compute-0 sudo[260743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:40 compute-0 podman[260745]: 2025-10-10 10:05:40.266163399 +0000 UTC m=+0.080924434 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.340 2 INFO nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]: 
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <host>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <uuid>4cd1b6a8-cc36-4bf3-aa73-609f8f6b6f5b</uuid>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <arch>x86_64</arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <microcode version='16777317'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <signature family='23' model='49' stepping='0'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='x2apic'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='tsc-deadline'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='osxsave'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='hypervisor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='tsc_adjust'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='spec-ctrl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='stibp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='arch-capabilities'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='cmp_legacy'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='topoext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='virt-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='lbrv'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='tsc-scale'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='vmcb-clean'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='pause-filter'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='pfthreshold'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='svme-addr-chk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='rdctl-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='mds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature name='pschange-mc-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <pages unit='KiB' size='4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <pages unit='KiB' size='2048'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <pages unit='KiB' size='1048576'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <power_management>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <suspend_mem/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </power_management>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <iommu support='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <migration_features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <live/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <uri_transports>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <uri_transport>tcp</uri_transport>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <uri_transport>rdma</uri_transport>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </uri_transports>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </migration_features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <topology>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <cells num='1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <cell id='0'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           <memory unit='KiB'>7864352</memory>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           <pages unit='KiB' size='4'>1966088</pages>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           <pages unit='KiB' size='2048'>0</pages>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           <distances>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <sibling id='0' value='10'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           </distances>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           <cpus num='8'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:           </cpus>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         </cell>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </cells>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </topology>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <cache>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </cache>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <secmodel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model>selinux</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <doi>0</doi>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </secmodel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <secmodel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model>dac</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <doi>0</doi>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </secmodel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </host>
Oct 10 10:05:40 compute-0 nova_compute[259962]: 
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <guest>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <os_type>hvm</os_type>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <arch name='i686'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <wordsize>32</wordsize>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <domain type='qemu'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <domain type='kvm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <pae/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <nonpae/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <apic default='on' toggle='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <cpuselection/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <deviceboot/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <externalSnapshot/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </guest>
Oct 10 10:05:40 compute-0 nova_compute[259962]: 
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <guest>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <os_type>hvm</os_type>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <arch name='x86_64'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <wordsize>64</wordsize>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <domain type='qemu'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <domain type='kvm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <apic default='on' toggle='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <cpuselection/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <deviceboot/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <externalSnapshot/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </guest>
Oct 10 10:05:40 compute-0 nova_compute[259962]: 
Oct 10 10:05:40 compute-0 nova_compute[259962]: </capabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]: 
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.351 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:05:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.383 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 10:05:40 compute-0 nova_compute[259962]: <domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <arch>i686</arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <vcpu max='4096'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <os supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='firmware'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <loader supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>rom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pflash</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='readonly'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>yes</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='secure'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </loader>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </os>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-0 python3.9[260746]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 sudo[260796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-0 sudo[260796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-0 sudo[260796]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 podman[260765]: 2025-10-10 10:05:40.418181207 +0000 UTC m=+0.128369696 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>file</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>anonymous</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>memfd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </memoryBacking>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <disk supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>disk</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cdrom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>floppy</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>lun</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>fdc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>sata</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </disk>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vnc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>dbus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </graphics>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <video supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='modelType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vga</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cirrus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>none</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>bochs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ramfb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </video>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='mode'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>subsystem</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>mandatory</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>requisite</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>optional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pci</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hostdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <rng supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>random</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </rng>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='driverType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>path</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>handle</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </filesystem>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emulator</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>external</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>2.0</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </tpm>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </redirdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <channel supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pty</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>unix</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </channel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>qemu</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </crypto>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <interface supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>passt</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </interface>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <panic supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>isa</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>hyperv</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </panic>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <gic supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sev supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='features'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>relaxed</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vapic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vpindex</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>runtime</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>synic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>stimer</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reset</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>frequencies</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ipi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>avic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hyperv>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </features>
Oct 10 10:05:40 compute-0 nova_compute[259962]: </domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.389 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 10:05:40 compute-0 nova_compute[259962]: <domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <arch>i686</arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <vcpu max='240'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <os supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='firmware'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <loader supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>rom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pflash</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='readonly'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>yes</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='secure'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </loader>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </os>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:40.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>file</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>anonymous</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>memfd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </memoryBacking>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <disk supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>disk</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cdrom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>floppy</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>lun</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ide</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>fdc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>sata</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </disk>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vnc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>dbus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </graphics>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <video supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='modelType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vga</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cirrus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>none</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>bochs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ramfb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </video>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='mode'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>subsystem</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>mandatory</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>requisite</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>optional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pci</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hostdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <rng supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>random</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </rng>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='driverType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>path</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>handle</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </filesystem>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emulator</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>external</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>2.0</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </tpm>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </redirdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <channel supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pty</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>unix</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </channel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>qemu</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </crypto>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <interface supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>passt</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </interface>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <panic supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>isa</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>hyperv</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </panic>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <gic supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-0 sudo[260831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sev supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='features'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>relaxed</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vapic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vpindex</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>runtime</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>synic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>stimer</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reset</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>frequencies</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ipi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>avic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hyperv>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </features>
Oct 10 10:05:40 compute-0 nova_compute[259962]: </domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.415 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.419 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 10:05:40 compute-0 nova_compute[259962]: <domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <arch>x86_64</arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <vcpu max='4096'/>
Oct 10 10:05:40 compute-0 sudo[260831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <os supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='firmware'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>efi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <loader supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>rom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pflash</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='readonly'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>yes</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='secure'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>yes</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </loader>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </os>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 sudo[260743]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>file</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>anonymous</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>memfd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </memoryBacking>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <disk supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>disk</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cdrom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>floppy</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>lun</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>fdc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>sata</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </disk>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vnc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>dbus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </graphics>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <video supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='modelType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vga</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cirrus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>none</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>bochs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ramfb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </video>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='mode'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>subsystem</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>mandatory</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>requisite</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>optional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pci</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hostdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <rng supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>random</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </rng>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='driverType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>path</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>handle</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </filesystem>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emulator</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>external</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>2.0</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </tpm>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </redirdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <channel supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pty</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>unix</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </channel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>qemu</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </crypto>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <interface supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>passt</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </interface>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <panic supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>isa</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>hyperv</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </panic>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <gic supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sev supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='features'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>relaxed</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vapic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vpindex</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>runtime</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>synic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>stimer</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reset</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>frequencies</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ipi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>avic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hyperv>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </features>
Oct 10 10:05:40 compute-0 nova_compute[259962]: </domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.486 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 10:05:40 compute-0 nova_compute[259962]: <domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <arch>x86_64</arch>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <vcpu max='240'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <os supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='firmware'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <loader supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>rom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pflash</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='readonly'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>yes</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='secure'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>no</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </loader>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </os>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>on</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>off</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xop'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='la57'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='hle'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='pku'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='erms'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='ss'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </blockers>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </mode>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </cpu>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>file</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>anonymous</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <value>memfd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </memoryBacking>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <disk supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>disk</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cdrom</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>floppy</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>lun</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ide</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>fdc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>sata</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </disk>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vnc</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>dbus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </graphics>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <video supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='modelType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vga</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>cirrus</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>none</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>bochs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ramfb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </video>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='mode'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>subsystem</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>mandatory</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>requisite</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>optional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pci</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>scsi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hostdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <rng supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>random</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>egd</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </rng>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='driverType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>path</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>handle</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </filesystem>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emulator</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>external</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>2.0</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </tpm>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='bus'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>usb</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </redirdev>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <channel supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>pty</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>unix</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </channel>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='type'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>qemu</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>builtin</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </crypto>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <interface supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='backendType'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>default</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>passt</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </interface>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <panic supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='model'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>isa</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>hyperv</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </panic>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </devices>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   <features>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <gic supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sev supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       <enum name='features'>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>relaxed</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vapic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vpindex</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>runtime</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>synic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>stimer</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reset</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>frequencies</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>ipi</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>avic</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-0 nova_compute[259962]:       </enum>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     </hyperv>
Oct 10 10:05:40 compute-0 nova_compute[259962]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-0 nova_compute[259962]:   </features>
Oct 10 10:05:40 compute-0 nova_compute[259962]: </domainCapabilities>
Oct 10 10:05:40 compute-0 nova_compute[259962]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
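[annotation] The <domainCapabilities> document ending above can be reproduced outside of nova with the libvirt Python bindings; a minimal sketch, assuming the default QEMU system URI and the KVM virt type this host reports:

    import libvirt  # python3-libvirt bindings

    # Fetch the same capabilities document that nova's
    # _get_domain_capabilities logs above: default emulator, x86_64,
    # default machine type, KVM virt type, no flags.
    conn = libvirt.open("qemu:///system")
    print(conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0))
    conn.close()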
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.543 2 DEBUG nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.544 2 INFO nova.virt.libvirt.host [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Secure Boot support detected
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.546 2 INFO nova.virt.libvirt.driver [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.555 2 DEBUG nova.virt.libvirt.driver [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.586 2 INFO nova.virt.node [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Determined node identity 5b1ab6df-62aa-4a93-8e24-04440191f108 from /var/lib/nova/compute_id
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.605 2 WARNING nova.compute.manager [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Compute nodes ['5b1ab6df-62aa-4a93-8e24-04440191f108'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.629 2 INFO nova.compute.manager [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.685 2 WARNING nova.compute.manager [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.685 2 DEBUG oslo_concurrency.lockutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.686 2 DEBUG oslo_concurrency.lockutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.686 2 DEBUG oslo_concurrency.lockutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
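[annotation] The Acquiring/acquired/released triplet above is oslo.concurrency's standard trace for a named in-process lock; a minimal sketch of the pattern, with the function body assumed:

    from oslo_concurrency import lockutils

    # Wrapping a critical section in a named lock emits the same three
    # DEBUG lines (Acquiring / acquired / released) seen above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # body runs while holding the "compute_resources" lock

    clean_compute_node_cache()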
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.686 2 DEBUG nova.compute.resource_tracker [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:05:40 compute-0 nova_compute[259962]: 2025-10-10 10:05:40.686 2 DEBUG oslo_concurrency.processutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:05:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:05:40 compute-0 podman[260972]: 2025-10-10 10:05:40.926301907 +0000 UTC m=+0.045101967 container create 2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:05:40 compute-0 systemd[1]: Started libpod-conmon-2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b.scope.
Oct 10 10:05:40 compute-0 podman[260972]: 2025-10-10 10:05:40.902341363 +0000 UTC m=+0.021141453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:05:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:41 compute-0 podman[260972]: 2025-10-10 10:05:41.020111506 +0000 UTC m=+0.138911586 container init 2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:05:41 compute-0 podman[260972]: 2025-10-10 10:05:41.029160889 +0000 UTC m=+0.147960949 container start 2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:41 compute-0 podman[260972]: 2025-10-10 10:05:41.032357342 +0000 UTC m=+0.151157402 container attach 2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:05:41 compute-0 relaxed_bartik[261024]: 167 167
Oct 10 10:05:41 compute-0 systemd[1]: libpod-2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b.scope: Deactivated successfully.
Oct 10 10:05:41 compute-0 conmon[261024]: conmon 2a9cde532aeed66be485 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b.scope/container/memory.events
Oct 10 10:05:41 compute-0 podman[260972]: 2025-10-10 10:05:41.044571617 +0000 UTC m=+0.163371677 container died 2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c33b6474c6d17ff3d377ce46c2358d82ef9e496b29575a40f03bdef6da22353-merged.mount: Deactivated successfully.
Oct 10 10:05:41 compute-0 podman[260972]: 2025-10-10 10:05:41.095426419 +0000 UTC m=+0.214226479 container remove 2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:05:41 compute-0 systemd[1]: libpod-conmon-2a9cde532aeed66be485308726aeb421286f7d446814d22453930fffa300ab6b.scope: Deactivated successfully.
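[annotation] The create/init/start/attach/died/remove sequence above is the journal signature of a short-lived `podman run --rm` probe, which cephadm issues repeatedly against the ceph image; a hedged sketch (the stat command is an assumption, chosen because the container printed the ceph UID/GID pair "167 167"):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # One ephemeral container per probe: podman logs one full
    # create/init/start/attach/died/remove cycle, then removes it (--rm).
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # e.g. "167 167", the ceph UID/GID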
Oct 10 10:05:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:41 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003042065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.171 2 DEBUG oslo_concurrency.processutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
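[annotation] That 0.485s `ceph df` round trip is how the libvirt driver sizes an RBD-backed disk pool during the resource audit; a minimal sketch of the same probe and the cluster totals it needs, assuming the JSON layout of current ceph releases:

    import json
    import subprocess

    # Same command the resource audit just ran, parsed for cluster totals.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])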
Oct 10 10:05:41 compute-0 sudo[261107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnqrcgxrwuwdcagwcdmoavqnkzfqkdup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090740.8497353-5233-37826539009045/AnsiballZ_systemd.py'
Oct 10 10:05:41 compute-0 sudo[261107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:41 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 10:05:41 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.295069506 +0000 UTC m=+0.059295666 container create 73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_agnesi, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:05:41 compute-0 systemd[1]: Started libpod-conmon-73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554.scope.
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.268084395 +0000 UTC m=+0.032310535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:05:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:41 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d1ceba133068ab209b75da4656182845a55558c1650cf010ef501135dd5cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d1ceba133068ab209b75da4656182845a55558c1650cf010ef501135dd5cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d1ceba133068ab209b75da4656182845a55558c1650cf010ef501135dd5cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d1ceba133068ab209b75da4656182845a55558c1650cf010ef501135dd5cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d1ceba133068ab209b75da4656182845a55558c1650cf010ef501135dd5cbc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
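[annotation] The recurring "supports timestamps until 2038 (0x7fffffff)" notice is the kernel flagging XFS mounts whose inodes carry 32-bit timestamps; 0x7fffffff is simply the signed 32-bit time_t maximum:

    import datetime

    # 0x7fffffff seconds after the epoch is the classic year-2038 limit.
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00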
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.398361722 +0000 UTC m=+0.162587842 container init 73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.406139954 +0000 UTC m=+0.170366074 container start 73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.412611612 +0000 UTC m=+0.176837732 container attach 73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_agnesi, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:41 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:41 compute-0 python3.9[261109]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.522 2 WARNING nova.virt.libvirt.driver [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.523 2 DEBUG nova.compute.resource_tracker [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4906MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.524 2 DEBUG oslo_concurrency.lockutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.524 2 DEBUG oslo_concurrency.lockutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.542 2 WARNING nova.compute.resource_tracker [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] No compute node record for compute-0.ctlplane.example.com:5b1ab6df-62aa-4a93-8e24-04440191f108: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 5b1ab6df-62aa-4a93-8e24-04440191f108 could not be found.
Oct 10 10:05:41 compute-0 systemd[1]: Stopping nova_compute container...
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.565 2 INFO nova.compute.resource_tracker [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 5b1ab6df-62aa-4a93-8e24-04440191f108
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.606 2 DEBUG nova.compute.resource_tracker [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.606 2 DEBUG nova.compute.resource_tracker [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.611 2 DEBUG oslo_concurrency.lockutils [None req-39c0a399-d126-4513-84eb-04c56da29e7c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.612 2 DEBUG oslo_concurrency.lockutils [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.612 2 DEBUG oslo_concurrency.lockutils [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:05:41 compute-0 nova_compute[259962]: 2025-10-10 10:05:41.612 2 DEBUG oslo_concurrency.lockutils [None req-e9ec3ec9-05c0-4d6e-8646-65bc12b87f3a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:05:41 compute-0 loving_agnesi[261151]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:05:41 compute-0 loving_agnesi[261151]: --> All data devices are unavailable
Oct 10 10:05:41 compute-0 systemd[1]: libpod-73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554.scope: Deactivated successfully.
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.784864513 +0000 UTC m=+0.549090633 container died 73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-46d1ceba133068ab209b75da4656182845a55558c1650cf010ef501135dd5cbc-merged.mount: Deactivated successfully.
Oct 10 10:05:41 compute-0 podman[261116]: 2025-10-10 10:05:41.834092874 +0000 UTC m=+0.598318994 container remove 73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:41 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:41 compute-0 systemd[1]: libpod-conmon-73b19bcebaab2eb2d6f6c540a3038510e0d3b6ea960672fd183fff7bf187f554.scope: Deactivated successfully.
Oct 10 10:05:41 compute-0 ceph-mon[73551]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3983368344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1003042065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2390731396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-0 sudo[260831]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:05:41.891 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:05:41.892 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:05:41.892 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:41 compute-0 sudo[261196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:05:41 compute-0 sudo[261196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:41 compute-0 sudo[261196]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:42 compute-0 sudo[261221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:05:42 compute-0 sudo[261221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
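[annotation] The cephadm wrapper invoked above ultimately runs ceph-volume inside a ceph container; a hedged sketch of the equivalent direct call and of walking its JSON output (OSD ids mapped to their backing LVs), assuming cephadm is on PATH:

    import json
    import subprocess

    # Equivalent of the sudo command above, minus --image/--timeout.
    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "21f084a3-af34-5230-afe4-ea5cd24a55f4",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for osd_id, devs in json.loads(out).items():
        print(osd_id, [d.get("lv_path") for d in devs])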
Oct 10 10:05:42 compute-0 virtqemud[260504]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 10 10:05:42 compute-0 virtqemud[260504]: hostname: compute-0
Oct 10 10:05:42 compute-0 virtqemud[260504]: End of file while reading data: Input/output error
Oct 10 10:05:42 compute-0 systemd[1]: libpod-ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f.scope: Deactivated successfully.
Oct 10 10:05:42 compute-0 systemd[1]: libpod-ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f.scope: Consumed 3.791s CPU time.
Oct 10 10:05:42 compute-0 podman[261161]: 2025-10-10 10:05:42.074250399 +0000 UTC m=+0.499807032 container died ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f-userdata-shm.mount: Deactivated successfully.
Oct 10 10:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a-merged.mount: Deactivated successfully.
Oct 10 10:05:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:42.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:42 compute-0 podman[261161]: 2025-10-10 10:05:42.595189472 +0000 UTC m=+1.020746105 container cleanup ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:42 compute-0 podman[261161]: nova_compute
Oct 10 10:05:42 compute-0 podman[261273]: nova_compute
Oct 10 10:05:42 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 10 10:05:42 compute-0 systemd[1]: Stopped nova_compute container.
Oct 10 10:05:42 compute-0 systemd[1]: Starting nova_compute container...
Oct 10 10:05:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75122bba71bae1a039ffa0a4a7f5f91a23fa4bd517cf9457f5f805d89b47f0a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-0 podman[261298]: 2025-10-10 10:05:42.808603404 +0000 UTC m=+0.121647990 container init ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Oct 10 10:05:42 compute-0 podman[261298]: 2025-10-10 10:05:42.816725077 +0000 UTC m=+0.129769643 container start ba8a5ce2e1654e7f449070da282769ba493abfe21e54ced0e871fe39adcd7b4f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute, io.buildah.version=1.41.3)
Oct 10 10:05:42 compute-0 nova_compute[261329]: + sudo -E kolla_set_configs
Oct 10 10:05:42 compute-0 podman[261325]: 2025-10-10 10:05:42.820819418 +0000 UTC m=+0.067225822 container create 0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pare, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:42 compute-0 podman[261298]: nova_compute
Oct 10 10:05:42 compute-0 systemd[1]: Started nova_compute container.
Oct 10 10:05:42 compute-0 sudo[261107]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:42 compute-0 podman[261325]: 2025-10-10 10:05:42.78154885 +0000 UTC m=+0.027955274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Validating config file
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying service configuration files
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /etc/ceph
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Creating directory /etc/ceph
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Writing out command to execute
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-0 nova_compute[261329]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-0 systemd[1]: Started libpod-conmon-0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f.scope.
Oct 10 10:05:42 compute-0 nova_compute[261329]: ++ cat /run_command
Oct 10 10:05:42 compute-0 nova_compute[261329]: + CMD=nova-compute
Oct 10 10:05:42 compute-0 nova_compute[261329]: + ARGS=
Oct 10 10:05:42 compute-0 nova_compute[261329]: + sudo kolla_copy_cacerts
Oct 10 10:05:42 compute-0 nova_compute[261329]: + [[ ! -n '' ]]
Oct 10 10:05:42 compute-0 nova_compute[261329]: + . kolla_extend_start
Oct 10 10:05:42 compute-0 nova_compute[261329]: Running command: 'nova-compute'
Oct 10 10:05:42 compute-0 nova_compute[261329]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 10:05:42 compute-0 nova_compute[261329]: + umask 0022
Oct 10 10:05:42 compute-0 nova_compute[261329]: + exec nova-compute
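[annotation] The INFO:__main__ lines above are kolla_set_configs replaying config.json, after which the shell trace reads /run_command and execs nova-compute; an illustrative sketch (not kolla's actual source) of that copy loop, assuming the documented config.json shape with "command" and "config_files" entries:

    import json
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)
    for item in cfg.get("config_files", []):
        # Produces the "Copying ... to ..." lines; the matching
        # "Setting permission for ..." step would chown/chmod per
        # each entry's owner/perm fields here.
        shutil.copy(item["source"], item["dest"])
    print("Running command:", cfg["command"])  # then: exec nova-compute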
Oct 10 10:05:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:42 compute-0 podman[261325]: 2025-10-10 10:05:42.985485776 +0000 UTC m=+0.231892200 container init 0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:05:42 compute-0 podman[261325]: 2025-10-10 10:05:42.994709124 +0000 UTC m=+0.241115518 container start 0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:43 compute-0 podman[261325]: 2025-10-10 10:05:43.001157522 +0000 UTC m=+0.247563916 container attach 0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:43 compute-0 ecstatic_pare[261366]: 167 167
Oct 10 10:05:43 compute-0 systemd[1]: libpod-0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f.scope: Deactivated successfully.
Oct 10 10:05:43 compute-0 conmon[261366]: conmon 0406e5d21207aa74391b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f.scope/container/memory.events
Oct 10 10:05:43 compute-0 podman[261325]: 2025-10-10 10:05:43.003169257 +0000 UTC m=+0.249575661 container died 0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a74e38ec7616a76a162cc5f1a4b6b539bca4fdca5c9436be317bf3e0976fa5-merged.mount: Deactivated successfully.
Oct 10 10:05:43 compute-0 podman[261325]: 2025-10-10 10:05:43.184976718 +0000 UTC m=+0.431383112 container remove 0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:05:43 compute-0 systemd[1]: libpod-conmon-0406e5d21207aa74391bf91417e92bed786f02792d291c745c9ab783a30bc94f.scope: Deactivated successfully.
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.354565776 +0000 UTC m=+0.048919392 container create 78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_torvalds, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:43 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:43 compute-0 systemd[1]: Started libpod-conmon-78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb.scope.
Oct 10 10:05:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:43 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fcce542a096d1ad13ec3744e3de304cf12892f34eba923a2624dbe78186a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fcce542a096d1ad13ec3744e3de304cf12892f34eba923a2624dbe78186a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fcce542a096d1ad13ec3744e3de304cf12892f34eba923a2624dbe78186a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fcce542a096d1ad13ec3744e3de304cf12892f34eba923a2624dbe78186a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.331913494 +0000 UTC m=+0.026267110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.439698694 +0000 UTC m=+0.134052320 container init 78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.445635356 +0000 UTC m=+0.139988972 container start 78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_torvalds, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.448637633 +0000 UTC m=+0.142991249 container attach 78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_torvalds, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]: {
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:     "0": [
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:         {
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "devices": [
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "/dev/loop3"
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             ],
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "lv_name": "ceph_lv0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "lv_size": "21470642176",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "name": "ceph_lv0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "tags": {
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.cluster_name": "ceph",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.crush_device_class": "",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.encrypted": "0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.osd_id": "0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.type": "block",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.vdo": "0",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:                 "ceph.with_tpm": "0"
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             },
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "type": "block",
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:             "vg_name": "ceph_vg0"
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:         }
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]:     ]
Oct 10 10:05:43 compute-0 naughty_torvalds[261422]: }
Oct 10 10:05:43 compute-0 systemd[1]: libpod-78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb.scope: Deactivated successfully.
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.737465701 +0000 UTC m=+0.431819337 container died 78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 10:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-46fcce542a096d1ad13ec3744e3de304cf12892f34eba923a2624dbe78186a61-merged.mount: Deactivated successfully.
Oct 10 10:05:43 compute-0 podman[261406]: 2025-10-10 10:05:43.786508294 +0000 UTC m=+0.480861930 container remove 78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 10:05:43 compute-0 systemd[1]: libpod-conmon-78a76143528212e521cff68d1060f6a6f80e37c46ac3831fbd9040fcd439c2cb.scope: Deactivated successfully.
Oct 10 10:05:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:43 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:43 compute-0 sudo[261221]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:43 compute-0 sudo[261468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:05:43 compute-0 sudo[261468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:43 compute-0 sudo[261468]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:43 compute-0 sudo[261522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:05:43 compute-0 sudo[261522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:44 compute-0 ceph-mon[73551]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:44 compute-0 sudo[261621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvcwgdnxbntayqwhoymlnaelnicshemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090743.8566804-5260-237437516530857/AnsiballZ_podman_container.py'
Oct 10 10:05:44 compute-0 sudo[261621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.383008437 +0000 UTC m=+0.049787258 container create 27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:05:44 compute-0 systemd[1]: Started libpod-conmon-27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464.scope.
Oct 10 10:05:44 compute-0 python3.9[261625]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 10:05:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:44.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.359289172 +0000 UTC m=+0.026068043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:05:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.466546425 +0000 UTC m=+0.133325286 container init 27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wescoff, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.472970173 +0000 UTC m=+0.139748994 container start 27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wescoff, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:05:44 compute-0 condescending_wescoff[261682]: 167 167
Oct 10 10:05:44 compute-0 systemd[1]: libpod-27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464.scope: Deactivated successfully.
Oct 10 10:05:44 compute-0 conmon[261682]: conmon 27913ed7a3626d71f24a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464.scope/container/memory.events
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.484528476 +0000 UTC m=+0.151307367 container attach 27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wescoff, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.48495148 +0000 UTC m=+0.151730311 container died 27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:05:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-adbf1dce4957608584340017591c9f23c3bf17169b5779c428ccedf00c799287-merged.mount: Deactivated successfully.
Oct 10 10:05:44 compute-0 podman[261666]: 2025-10-10 10:05:44.525284582 +0000 UTC m=+0.192063403 container remove 27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wescoff, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:44 compute-0 systemd[1]: libpod-conmon-27913ed7a3626d71f24afe80f8419992cf29d1dcbc1506b915688a87afe1c464.scope: Deactivated successfully.
Oct 10 10:05:44 compute-0 systemd[1]: Started libpod-conmon-5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54.scope.
Oct 10 10:05:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aa6ef0c0570b76944acfcac85a1a7b077a13b0e6f734ffa6b430e5f96a0fb7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aa6ef0c0570b76944acfcac85a1a7b077a13b0e6f734ffa6b430e5f96a0fb7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aa6ef0c0570b76944acfcac85a1a7b077a13b0e6f734ffa6b430e5f96a0fb7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 podman[261723]: 2025-10-10 10:05:44.657658797 +0000 UTC m=+0.102185830 container init 5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 10:05:44 compute-0 podman[261723]: 2025-10-10 10:05:44.665619704 +0000 UTC m=+0.110146727 container start 5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001)
Oct 10 10:05:44 compute-0 python3.9[261625]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 10 10:05:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Applying nova statedir ownership
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 10 10:05:44 compute-0 nova_compute_init[261762]: INFO:nova_statedir:Nova statedir ownership complete
Oct 10 10:05:44 compute-0 podman[261747]: 2025-10-10 10:05:44.714122711 +0000 UTC m=+0.056142514 container create 73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:05:44 compute-0 systemd[1]: libpod-5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54.scope: Deactivated successfully.
Oct 10 10:05:44 compute-0 podman[261763]: 2025-10-10 10:05:44.735577103 +0000 UTC m=+0.040430776 container died 5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:44 compute-0 systemd[1]: Started libpod-conmon-73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6.scope.
Oct 10 10:05:44 compute-0 podman[261747]: 2025-10-10 10:05:44.692294506 +0000 UTC m=+0.034314329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:05:44 compute-0 podman[261774]: 2025-10-10 10:05:44.788154551 +0000 UTC m=+0.057841599 container cleanup 5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 10 10:05:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:05:44 compute-0 systemd[1]: libpod-conmon-5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54.scope: Deactivated successfully.
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a71b735449c6e6338eb1a8276302cf651b2dedf060f59031fc9b76d74af0a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a71b735449c6e6338eb1a8276302cf651b2dedf060f59031fc9b76d74af0a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a71b735449c6e6338eb1a8276302cf651b2dedf060f59031fc9b76d74af0a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a71b735449c6e6338eb1a8276302cf651b2dedf060f59031fc9b76d74af0a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-0 podman[261747]: 2025-10-10 10:05:44.813663955 +0000 UTC m=+0.155683748 container init 73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:44 compute-0 podman[261747]: 2025-10-10 10:05:44.823625107 +0000 UTC m=+0.165644900 container start 73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mendel, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:44 compute-0 podman[261747]: 2025-10-10 10:05:44.826649275 +0000 UTC m=+0.168669068 container attach 73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mendel, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:05:44 compute-0 sudo[261621]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.059 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.060 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.060 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.060 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.212 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.228 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:45 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-29aa6ef0c0570b76944acfcac85a1a7b077a13b0e6f734ffa6b430e5f96a0fb7-merged.mount: Deactivated successfully.
Oct 10 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b9ab4a29e53eba9c2e27e14d07ffb041ed3bae6b7580debe47d55c312060f54-userdata-shm.mount: Deactivated successfully.
Oct 10 10:05:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:45 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:45 compute-0 lvm[261913]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:05:45 compute-0 lvm[261913]: VG ceph_vg0 finished
Oct 10 10:05:45 compute-0 kind_mendel[261797]: {}
Oct 10 10:05:45 compute-0 systemd[1]: libpod-73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6.scope: Deactivated successfully.
Oct 10 10:05:45 compute-0 systemd[1]: libpod-73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6.scope: Consumed 1.186s CPU time.
Oct 10 10:05:45 compute-0 podman[261747]: 2025-10-10 10:05:45.5749485 +0000 UTC m=+0.916968293 container died 73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a71b735449c6e6338eb1a8276302cf651b2dedf060f59031fc9b76d74af0a8-merged.mount: Deactivated successfully.
Oct 10 10:05:45 compute-0 podman[261747]: 2025-10-10 10:05:45.618176796 +0000 UTC m=+0.960196589 container remove 73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:05:45 compute-0 systemd[1]: libpod-conmon-73e05a48a95ab745db557e863713f5f207d0723039249e9c486a59b196c650c6.scope: Deactivated successfully.
Oct 10 10:05:45 compute-0 sudo[261522]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:05:45 compute-0 sshd-session[223302]: Connection closed by 192.168.122.30 port 52458
Oct 10 10:05:45 compute-0 sshd-session[223299]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:05:45 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 10 10:05:45 compute-0 systemd[1]: session-55.scope: Consumed 3min 136ms CPU time.
Oct 10 10:05:45 compute-0 systemd-logind[806]: Session 55 logged out. Waiting for processes to exit.
Oct 10 10:05:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:45 compute-0 systemd-logind[806]: Removed session 55.
Oct 10 10:05:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:05:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:45 compute-0 sudo[261930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.746 2 INFO nova.virt.driver [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 10 10:05:45 compute-0 sudo[261930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:45 compute-0 sudo[261930]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:45 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.840 2 INFO nova.compute.provider_config [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.857 2 DEBUG oslo_concurrency.lockutils [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.858 2 DEBUG oslo_concurrency.lockutils [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.858 2 DEBUG oslo_concurrency.lockutils [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.858 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.858 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.859 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.859 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.859 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.859 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.859 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.859 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.860 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.861 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.862 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.862 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.862 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.862 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.862 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.862 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.863 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.863 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.863 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.863 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.864 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.864 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.864 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.864 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.864 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.864 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.865 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.865 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.865 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.865 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.865 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.866 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.866 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.866 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.866 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.866 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.867 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.867 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.867 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.867 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.867 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.868 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.868 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.868 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.868 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.868 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.868 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.869 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.869 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.869 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.869 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.869 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.870 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.871 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.872 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.873 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.873 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.873 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.873 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.873 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.873 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.874 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.875 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.876 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.877 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.878 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.879 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.880 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.881 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.882 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.883 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.883 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.883 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.883 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.883 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.883 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.884 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.885 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.886 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.887 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.887 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.887 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.887 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.887 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.887 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.888 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.889 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.890 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.891 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.891 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.891 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.891 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.891 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.892 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.893 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.894 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.894 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.894 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.894 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.894 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.894 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.895 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.895 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.895 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.895 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.895 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.895 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.896 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.897 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.898 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.899 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.899 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.899 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.899 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.899 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.899 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.900 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.901 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.902 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.903 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.904 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.905 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.906 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.906 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.906 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.906 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.906 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.906 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.907 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.907 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.907 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.907 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.907 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.907 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.908 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.909 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.910 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.911 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.911 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.911 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.911 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.911 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.911 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.912 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.913 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.913 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.913 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.913 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.913 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.913 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.914 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.915 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.915 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.915 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.915 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.915 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.915 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.916 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.917 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.918 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.919 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.919 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.919 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.919 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.919 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.919 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.920 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.920 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.920 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.920 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.920 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.920 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.921 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.922 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.923 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.924 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.925 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.925 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.925 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.925 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.925 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.925 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.926 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
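
Every "group.option = value" line in this dump is produced by oslo.config's ConfigOpts.log_opt_values(), which nova-compute calls once at startup: it walks each registered option group and logs one line per option at DEBUG, masking secret options (passwords, shared secrets) as ****. The trailing "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609" is oslo.log's debug-format suffix (calling function plus file:line), not part of the option value. A minimal, self-contained sketch of the same mechanism, using an invented "demo" group rather than Nova's real option definitions:

    # Reproduces the dump format above with a toy option group.
    # Group name "demo" and both options are illustrative.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.StrOpt('service_type', default='identity'),
            cfg.BoolOpt('collect_timing', default=False),
        ],
        group='demo',
    )
    CONF([])  # parse an empty command line so defaults are resolved

    # Emits one DEBUG line per option, e.g. "demo.service_type = identity"
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
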
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.927 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.928 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.928 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.928 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.928 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.928 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.929 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.930 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.930 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.930 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.930 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.930 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.930 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.931 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.931 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.931 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.931 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.931 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.931 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.932 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.932 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.932 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.932 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.933 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.933 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.933 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.941 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.941 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.941 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.941 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.942 2 WARNING oslo_config.cfg [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 10:05:45 compute-0 nova_compute[261329]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 10:05:45 compute-0 nova_compute[261329]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 10:05:45 compute-0 nova_compute[261329]: and ``live_migration_inbound_addr`` respectively.
Oct 10 10:05:45 compute-0 nova_compute[261329]: ).  Its value may be silently ignored in the future.
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.942 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
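
The WARNING above is oslo.config's standard deprecated-for-removal notice: nova.conf still sets live_migration_uri even though live_migration_scheme and live_migration_inbound_addr replace it. A hypothetical re-creation of the trigger; the option name mirrors Nova's, but the definition and config file here are illustrative:

    # Registering an option with deprecated_for_removal=True makes
    # oslo.config log the 'Deprecated: Option "..." from group "..." is
    # deprecated for removal' warning once the set value is used (the
    # exact trigger point varies by oslo.config release).
    import logging
    import tempfile

    from oslo_config import cfg

    logging.basicConfig(level=logging.WARNING)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.StrOpt('live_migration_uri',
                       deprecated_for_removal=True,
                       deprecated_reason='Use live_migration_scheme and '
                                         'live_migration_inbound_addr.'),
        ],
        group='libvirt',
    )

    with tempfile.NamedTemporaryFile('w', suffix='.conf') as f:
        f.write('[libvirt]\nlive_migration_uri = qemu+tls://%s/system\n')
        f.flush()
        CONF(['--config-file', f.name])
        print(CONF.libvirt.live_migration_uri)  # warning logged around here
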
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.942 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.942 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.942 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.942 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.943 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.943 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.943 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.943 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.943 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.943 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.944 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rbd_secret_uuid        = 21f084a3-af34-5230-afe4-ea5cd24a55f4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.945 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.946 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.946 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.946 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.946 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.946 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.946 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.947 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.948 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.949 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
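
The [libvirt] storage settings above (images_type = rbd, images_rbd_pool = vms, rbd_user = openstack, images_rbd_ceph_conf = /etc/ceph/ceph.conf) mean instance disks live as RBD images in the Ceph "vms" pool. A minimal sketch of inspecting that pool with the python-rados/python-rbd bindings; it assumes the listed ceph.conf and a matching client.openstack keyring exist on the host:

    # List instance disk images in the pool nova is configured to use.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')   # libvirt.rbd_user
    cluster.connect(timeout=5)   # mirrors libvirt.rbd_connect_timeout = 5
    try:
        ioctx = cluster.open_ioctx('vms')   # libvirt.images_rbd_pool
        try:
            print(rbd.RBD().list(ioctx))    # e.g. ['<uuid>_disk', ...]
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
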
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.950 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.951 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.952 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.953 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.954 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.955 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.955 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.955 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.955 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.955 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.955 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.956 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.957 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.958 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.959 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
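
The [placement] block above is a standard keystoneauth1 password-auth section (auth_type = password), which is how nova-compute talks to the Placement API. A hand-built equivalent session, with a placeholder where the log masks the password as ****:

    # Equivalent of the [placement] auth settings, built directly with
    # keystoneauth1. Endpoint discovery uses the 'internal' interface to
    # match placement.valid_interfaces = ['internal'].
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REDACTED',           # masked as **** in the log
        project_name='service',
        user_domain_name='Default',
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)

    # Resolve the placement endpoint the same way the service would.
    print(sess.get_endpoint(service_type='placement',
                            interface='internal',
                            region_name='regionOne'))
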
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.960 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.961 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.962 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.962 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.962 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.962 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.962 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.962 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.963 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.964 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.965 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.966 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
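The filter_scheduler values above are resolved from the [filter_scheduler] section of the service's nova.conf. A minimal sketch of how one such option is registered and read back with oslo.config (assumptions: a local nova.conf file exists; the option registration below is illustrative, not nova's actual registration code):

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Register a list-valued option under the filter_scheduler group,
    # mirroring the enabled_filters entry dumped above.
    CONF.register_opts(
        [cfg.ListOpt('enabled_filters',
                     default=['ComputeFilter'],
                     help='Scheduler host filters to enable.')],
        group='filter_scheduler')
    # Resolve from a config file, e.g. a [filter_scheduler] section with
    # enabled_filters = ComputeFilter,ImagePropertiesFilter
    CONF(['--config-file', 'nova.conf'])
    print(CONF.filter_scheduler.enabled_filters)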
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.967 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.968 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.969 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.970 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.970 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.970 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.970 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.970 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.970 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.971 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.972 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.973 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.974 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.975 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.976 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.977 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.977 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.977 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.977 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.977 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.977 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.978 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.978 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.978 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.978 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.978 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.978 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.979 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.979 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.979 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.979 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.979 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.980 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.981 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.982 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.983 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.983 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.983 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.983 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.983 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.983 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.984 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.985 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.986 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.987 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.987 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.987 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.987 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.987 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.988 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.989 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.990 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.991 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.991 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.991 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
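The [oslo_messaging_rabbit] options above (durable quorum queues on, heartbeats at rate 2 with a 60 s threshold, SSL off) take effect when the service builds its RPC transport. A minimal sketch of that step (assumptions: oslo.messaging is installed; the broker URL is a placeholder, since the real transport_url is masked as **** in this dump):

    from oslo_config import cfg
    import oslo_messaging

    CONF = cfg.ConfigOpts()
    CONF([])

    # Creating the transport is what consumes the oslo_messaging_rabbit
    # group (heartbeat_timeout_threshold, rabbit_quorum_queue, ssl, ...).
    transport = oslo_messaging.get_rpc_transport(
        CONF, url='rabbit://user:pass@rabbit.example:5672/')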
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.991 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.991 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.991 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.992 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.993 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.994 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.995 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.996 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.997 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.998 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:45.999 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.000 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.001 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.002 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.002 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.002 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.002 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.002 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.002 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.003 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.003 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.003 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.003 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.003 2 DEBUG oslo_service.service [None req-feb95251-d02f-4575-b24c-f79247b183cd - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.004 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.019 2 INFO nova.virt.node [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Determined node identity 5b1ab6df-62aa-4a93-8e24-04440191f108 from /var/lib/nova/compute_id
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.019 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.020 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.020 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.021 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.032 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f05efe0d4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.034 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f05efe0d4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.035 2 INFO nova.virt.libvirt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Connection event '1' reason 'None'
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.041 2 INFO nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]: 
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <host>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <uuid>4cd1b6a8-cc36-4bf3-aa73-609f8f6b6f5b</uuid>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <arch>x86_64</arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <microcode version='16777317'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <signature family='23' model='49' stepping='0'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='x2apic'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='tsc-deadline'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='osxsave'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='hypervisor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='tsc_adjust'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='spec-ctrl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='stibp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='arch-capabilities'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='cmp_legacy'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='topoext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='virt-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='lbrv'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='tsc-scale'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='vmcb-clean'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='pause-filter'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='pfthreshold'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='svme-addr-chk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='rdctl-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='mds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature name='pschange-mc-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <pages unit='KiB' size='4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <pages unit='KiB' size='2048'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <pages unit='KiB' size='1048576'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <power_management>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <suspend_mem/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </power_management>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <iommu support='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <migration_features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <live/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <uri_transports>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <uri_transport>tcp</uri_transport>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <uri_transport>rdma</uri_transport>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </uri_transports>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </migration_features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <topology>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <cells num='1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <cell id='0'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           <memory unit='KiB'>7864352</memory>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           <pages unit='KiB' size='4'>1966088</pages>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           <pages unit='KiB' size='2048'>0</pages>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           <distances>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <sibling id='0' value='10'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           </distances>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           <cpus num='8'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:           </cpus>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         </cell>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </cells>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </topology>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <cache>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </cache>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <secmodel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model>selinux</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <doi>0</doi>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </secmodel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <secmodel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model>dac</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <doi>0</doi>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </secmodel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </host>
Oct 10 10:05:46 compute-0 nova_compute[261329]: 
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <guest>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <os_type>hvm</os_type>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <arch name='i686'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <wordsize>32</wordsize>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <domain type='qemu'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <domain type='kvm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <pae/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <nonpae/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <apic default='on' toggle='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <cpuselection/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <deviceboot/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <externalSnapshot/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </guest>
Oct 10 10:05:46 compute-0 nova_compute[261329]: 
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <guest>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <os_type>hvm</os_type>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <arch name='x86_64'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <wordsize>64</wordsize>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <domain type='qemu'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <domain type='kvm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <apic default='on' toggle='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <cpuselection/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <deviceboot/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <externalSnapshot/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </guest>
Oct 10 10:05:46 compute-0 nova_compute[261329]: 
Oct 10 10:05:46 compute-0 nova_compute[261329]: </capabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]: 
Oct 10 10:05:46 compute-0 ceph-mon[73551]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.052 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.054 2 DEBUG nova.virt.libvirt.volume.mount [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.056 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 10:05:46 compute-0 nova_compute[261329]: <domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <domain>kvm</domain>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <arch>i686</arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <vcpu max='240'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <iothreads supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <os supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='firmware'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <loader supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>rom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pflash</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='readonly'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>yes</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='secure'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </loader>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </os>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='maximumMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='succor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='custom' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-128'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-256'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-512'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>file</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>anonymous</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>memfd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </memoryBacking>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <disk supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>disk</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cdrom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>floppy</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>lun</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ide</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>fdc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>sata</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vnc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>dbus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </graphics>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <video supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='modelType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vga</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cirrus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>none</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>bochs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ramfb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </video>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='mode'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>subsystem</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>mandatory</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>requisite</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>optional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pci</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hostdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <rng supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>random</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='driverType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>path</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>handle</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </filesystem>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emulator</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>external</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>2.0</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </tpm>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </redirdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <channel supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pty</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>unix</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </channel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>qemu</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </crypto>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <interface supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>passt</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <panic supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>isa</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>hyperv</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </panic>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <gic supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sev supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='features'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>relaxed</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vapic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vpindex</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>runtime</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>synic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>stimer</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reset</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>frequencies</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ipi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>avic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hyperv>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </features>
Oct 10 10:05:46 compute-0 nova_compute[261329]: </domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
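The record above is one complete <domainCapabilities> document as returned to nova's _get_domain_capabilities helper (nova/virt/libvirt/host.py). A minimal sketch of fetching the same XML directly through the libvirt Python bindings follows; the connection URI and the argument values are assumptions inferred from this capture (qemu:///system, /usr/libexec/qemu-kvm, and the arch=i686/machine_type=q35 pair from the next record), not values the log itself confirms.

    import libvirt

    # Connect to the local libvirt daemon, as nova-compute does on this host
    # (URI is an assumption; nova-compute typically uses qemu:///system).
    conn = libvirt.open('qemu:///system')

    # virConnectGetDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    # is the libvirt call nova wraps; the argument values here mirror the request
    # logged at 10:05:46.063 and are illustrative only.
    xml = conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'i686', 'q35', 'kvm', 0)
    print(xml)  # prints a <domainCapabilities> document like the one logged above

    conn.close()

The shell equivalent is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine q35 --virttype kvm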
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.063 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 10:05:46 compute-0 nova_compute[261329]: <domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <domain>kvm</domain>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <arch>i686</arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <vcpu max='4096'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <iothreads supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <os supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='firmware'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <loader supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>rom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pflash</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='readonly'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>yes</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='secure'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </loader>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </os>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='maximumMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='succor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='custom' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-128'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-256'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-512'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>file</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>anonymous</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>memfd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </memoryBacking>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <disk supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>disk</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cdrom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>floppy</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>lun</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>fdc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>sata</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vnc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>dbus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </graphics>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <video supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='modelType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vga</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cirrus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>none</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>bochs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ramfb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </video>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='mode'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>subsystem</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>mandatory</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>requisite</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>optional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pci</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hostdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <rng supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>random</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='driverType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>path</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>handle</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </filesystem>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emulator</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>external</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>2.0</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </tpm>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </redirdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <channel supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pty</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>unix</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </channel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>qemu</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </crypto>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <interface supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>passt</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <panic supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>isa</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>hyperv</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </panic>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <gic supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sev supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='features'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>relaxed</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vapic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vpindex</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>runtime</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>synic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>stimer</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reset</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>frequencies</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ipi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>avic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hyperv>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </features>
Oct 10 10:05:46 compute-0 nova_compute[261329]: </domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.090 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.093 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 10:05:46 compute-0 nova_compute[261329]: <domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <domain>kvm</domain>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <arch>x86_64</arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <vcpu max='240'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <iothreads supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <os supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='firmware'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <loader supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>rom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pflash</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='readonly'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>yes</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='secure'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </loader>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </os>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='maximumMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='succor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='custom' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-128'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-256'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-512'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>file</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>anonymous</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>memfd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </memoryBacking>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <disk supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>disk</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cdrom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>floppy</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>lun</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ide</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>fdc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>sata</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vnc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>dbus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </graphics>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <video supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='modelType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vga</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cirrus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>none</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>bochs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ramfb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </video>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='mode'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>subsystem</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>mandatory</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>requisite</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>optional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pci</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hostdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <rng supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>random</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='driverType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>path</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>handle</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </filesystem>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emulator</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>external</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>2.0</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </tpm>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </redirdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <channel supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pty</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>unix</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </channel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>qemu</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </crypto>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <interface supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>passt</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <panic supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>isa</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>hyperv</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </panic>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <gic supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sev supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='features'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>relaxed</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vapic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vpindex</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>runtime</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>synic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>stimer</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reset</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>frequencies</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ipi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>avic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hyperv>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </features>
Oct 10 10:05:46 compute-0 nova_compute[261329]: </domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.153 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 10:05:46 compute-0 nova_compute[261329]: <domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <domain>kvm</domain>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <arch>x86_64</arch>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <vcpu max='4096'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <iothreads supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <os supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='firmware'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>efi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <loader supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>rom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pflash</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='readonly'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>yes</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='secure'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>yes</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>no</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </loader>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </os>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='maximumMigratable'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>on</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>off</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='succor'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <mode name='custom' supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Denverton-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='EPYC-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-128'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-256'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx10-512'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Haswell-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xop'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='la57'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='hle'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='pku'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='erms'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='ss'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </blockers>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </mode>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>file</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>anonymous</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <value>memfd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </memoryBacking>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <disk supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>disk</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cdrom</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>floppy</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>lun</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>fdc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>sata</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vnc</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>dbus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </graphics>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <video supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='modelType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vga</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>cirrus</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>none</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>bochs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ramfb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </video>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='mode'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>subsystem</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>mandatory</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>requisite</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>optional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pci</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>scsi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hostdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <rng supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>random</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>egd</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='driverType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>path</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>handle</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </filesystem>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emulator</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>external</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>2.0</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </tpm>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='bus'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>usb</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </redirdev>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <channel supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>pty</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>unix</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </channel>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='type'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>qemu</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>builtin</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </crypto>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <interface supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='backendType'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>default</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>passt</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <panic supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='model'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>isa</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>hyperv</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </panic>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   <features>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <gic supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sev supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       <enum name='features'>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>relaxed</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vapic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vpindex</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>runtime</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>synic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>stimer</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reset</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>frequencies</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>ipi</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>avic</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-0 nova_compute[261329]:       </enum>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     </hyperv>
Oct 10 10:05:46 compute-0 nova_compute[261329]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-0 nova_compute[261329]:   </features>
Oct 10 10:05:46 compute-0 nova_compute[261329]: </domainCapabilities>
Oct 10 10:05:46 compute-0 nova_compute[261329]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.215 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.216 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.216 2 INFO nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Secure Boot support detected
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.218 2 INFO nova.virt.libvirt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.228 2 DEBUG nova.virt.libvirt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.258 2 INFO nova.virt.node [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Determined node identity 5b1ab6df-62aa-4a93-8e24-04440191f108 from /var/lib/nova/compute_id
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.277 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Verified node 5b1ab6df-62aa-4a93-8e24-04440191f108 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.306 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 10 10:05:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:05:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:05:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:46.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:46.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:46 compute-0 rsyslogd[1006]: imjournal from <np0005479821:nova_compute>: begin to drop messages due to rate-limiting
Oct 10 10:05:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:46 compute-0 nova_compute[261329]: 2025-10-10 10:05:46.950 2 ERROR nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Could not retrieve compute node resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '5b1ab6df-62aa-4a93-8e24-04440191f108' not found: No resource provider with uuid 5b1ab6df-62aa-4a93-8e24-04440191f108 found  ", "request_id": "req-25c2876c-0835-4690-b7ee-3f2c8cab6695"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '5b1ab6df-62aa-4a93-8e24-04440191f108' not found: No resource provider with uuid 5b1ab6df-62aa-4a93-8e24-04440191f108 found  ", "request_id": "req-25c2876c-0835-4690-b7ee-3f2c8cab6695"}]}
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.010 2 DEBUG oslo_concurrency.lockutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.011 2 DEBUG oslo_concurrency.lockutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.011 2 DEBUG oslo_concurrency.lockutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.011 2 DEBUG nova.compute.resource_tracker [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.012 2 DEBUG oslo_concurrency.processutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1757370242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:47.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:47.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:47.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:47 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:47] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:05:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:47] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:47 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/39140762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139530236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.598 2 DEBUG oslo_concurrency.processutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.758 2 WARNING nova.virt.libvirt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.760 2 DEBUG nova.compute.resource_tracker [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4913MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.760 2 DEBUG oslo_concurrency.lockutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.761 2 DEBUG oslo_concurrency.lockutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:47 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.907 2 ERROR nova.compute.resource_tracker [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '5b1ab6df-62aa-4a93-8e24-04440191f108' not found: No resource provider with uuid 5b1ab6df-62aa-4a93-8e24-04440191f108 found  ", "request_id": "req-c319a483-08f1-4fce-87e0-9b14b2d64192"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '5b1ab6df-62aa-4a93-8e24-04440191f108' not found: No resource provider with uuid 5b1ab6df-62aa-4a93-8e24-04440191f108 found  ", "request_id": "req-c319a483-08f1-4fce-87e0-9b14b2d64192"}]}
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.908 2 DEBUG nova.compute.resource_tracker [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:05:47 compute-0 nova_compute[261329]: 2025-10-10 10:05:47.908 2 DEBUG nova.compute.resource_tracker [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.078 2 INFO nova.scheduler.client.report [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [req-d10fc298-652c-42ef-92b2-af6eca24316e] Created resource provider record via placement API for resource provider with UUID 5b1ab6df-62aa-4a93-8e24-04440191f108 and name compute-0.ctlplane.example.com.
Oct 10 10:05:48 compute-0 ceph-mon[73551]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/39140762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3139530236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1942422153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.161 2 DEBUG oslo_concurrency.processutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:05:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:48.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:05:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:48.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204732520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.627 2 DEBUG oslo_concurrency.processutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.633 2 DEBUG nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 10 10:05:48 compute-0 nova_compute[261329]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.633 2 INFO nova.virt.libvirt.host [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] kernel doesn't support AMD SEV
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.634 2 DEBUG nova.compute.provider_tree [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.634 2 DEBUG nova.virt.libvirt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.685 2 DEBUG nova.scheduler.client.report [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Updated inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.686 2 DEBUG nova.compute.provider_tree [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Updating resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.686 2 DEBUG nova.compute.provider_tree [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:05:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.804 2 DEBUG nova.compute.provider_tree [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Updating resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.831 2 DEBUG nova.compute.resource_tracker [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.831 2 DEBUG oslo_concurrency.lockutils [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.832 2 DEBUG nova.service [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.910 2 DEBUG nova.service [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 10 10:05:48 compute-0 nova_compute[261329]: 2025-10-10 10:05:48.911 2 DEBUG nova.servicegroup.drivers.db [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 10 10:05:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/425651605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3204732520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:49 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00035f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:49 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00035f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:49 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:50 compute-0 ceph-mon[73551]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:50.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a8003f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:51 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a00035f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:52 compute-0 ceph-mon[73551]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:05:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:52.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:05:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:52.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:53 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:53 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c0003780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:53 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:54 compute-0 ceph-mon[73551]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:54.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:54.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:55 compute-0 podman[262031]: 2025-10-10 10:05:55.223848001 +0000 UTC m=+0.073790614 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 10 10:05:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:55 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:55 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:55 compute-0 sudo[262051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:05:55 compute-0 sudo[262051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:55 compute-0 sudo[262051]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:55 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:56 compute-0 ceph-mon[73551]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:56.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:56.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:05:57.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:05:57 compute-0 ceph-mon[73551]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:57 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:57] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:05:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:05:57] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:05:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:57 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:57 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a40028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:05:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:58.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:59 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:59 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:59 compute-0 ceph-mon[73551]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:05:59 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:00.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:00.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:06:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:01 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:01 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:01 compute-0 ceph-mon[73551]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:01 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:02.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:06:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:03 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:03 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:03 compute-0 ceph-mon[73551]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:06:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:03 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:04.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:04.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:05 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:05 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:05 compute-0 ceph-mon[73551]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:05 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:06.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:07.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:06:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:07] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:06:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:07] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:06:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:07 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:07 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:07 compute-0 ceph-mon[73551]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:07 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:08.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:09 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:09 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:09 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:09 compute-0 ceph-mon[73551]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:10 compute-0 podman[262091]: 2025-10-10 10:06:10.219010862 +0000 UTC m=+0.065589639 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:06:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:10.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:11 compute-0 podman[262113]: 2025-10-10 10:06:11.221631921 +0000 UTC m=+0.069806626 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:06:11 compute-0 podman[262114]: 2025-10-10 10:06:11.252347023 +0000 UTC m=+0.093613334 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 10:06:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:11 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:11 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:11 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:11 compute-0 ceph-mon[73551]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:06:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:13 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:13 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:06:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2805633632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:06:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2805633632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:13 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:13 compute-0 ceph-mon[73551]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:06:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1790607881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1790607881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2805633632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2805633632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:06:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2939433712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:06:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2939433712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:06:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:14.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:06:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2939433712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2939433712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:15 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b0003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:15 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26a0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:15 compute-0 sudo[262161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:06:15 compute-0 sudo[262161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:15 compute-0 sudo[262161]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:15 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:15 compute-0 ceph-mon[73551]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:06:16
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', '.nfs', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:06:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:06:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:06:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:06:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:06:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:16.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:06:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:17.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:06:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:17.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:06:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:17 compute-0 kernel: ganesha.nfsd[262028]: segfault at 50 ip 00007f27856ff32e sp 00007f27537fd210 error 4 in libntirpc.so.5.8[7f27856e4000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 10 10:06:17 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:06:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[252863]: 10/10/2025 10:06:17 : epoch 68e8da37 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26b4003e60 fd 39 proxy ignored for local
Oct 10 10:06:17 compute-0 systemd[1]: Started Process Core Dump (PID 262188/UID 0).
Oct 10 10:06:17 compute-0 ceph-mon[73551]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:18.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:18 compute-0 systemd-coredump[262189]: Process 252867 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f27856ff32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:06:18 compute-0 systemd[1]: systemd-coredump@6-262188-0.service: Deactivated successfully.
Oct 10 10:06:18 compute-0 systemd[1]: systemd-coredump@6-262188-0.service: Consumed 1.135s CPU time.
Oct 10 10:06:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:06:18 compute-0 podman[262196]: 2025-10-10 10:06:18.691781127 +0000 UTC m=+0.027839790 container died c5924c96619f120cfd7480d27fef3f5723b94a395c2a3a65294d1f3dffa6035c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:06:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2dcde97246b697626d670fb7e173e5d7238bfb022af9eecbd981b5d1f028f0f-merged.mount: Deactivated successfully.
Oct 10 10:06:18 compute-0 podman[262196]: 2025-10-10 10:06:18.754849785 +0000 UTC m=+0.090908428 container remove c5924c96619f120cfd7480d27fef3f5723b94a395c2a3a65294d1f3dffa6035c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Oct 10 10:06:18 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:06:18 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:06:18 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.534s CPU time.
Oct 10 10:06:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:19 compute-0 ceph-mon[73551]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:06:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:06:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:20.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:21 compute-0 ceph-mon[73551]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:22.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:22.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:06:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100623 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:06:24 compute-0 ceph-mon[73551]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:06:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:24.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:06:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:24.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:06:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:26 compute-0 ceph-mon[73551]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:26 compute-0 podman[262245]: 2025-10-10 10:06:26.260706066 +0000 UTC m=+0.100250989 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:06:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:26.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:27.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:06:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:27.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:06:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:28 compute-0 ceph-mon[73551]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:28.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:29 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 7.
Oct 10 10:06:29 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:06:29 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.534s CPU time.
Oct 10 10:06:29 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:06:29 compute-0 podman[262321]: 2025-10-10 10:06:29.408242389 +0000 UTC m=+0.072290695 container create b64e2c28c00cdad64520497b05a727c694deda0f85dc605ac7e627b9d69aabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e5f7b8a5a794d8e210571ba096319c01a6e0e6deed160ff8b7b5cf4a5868ee/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:29 compute-0 podman[262321]: 2025-10-10 10:06:29.381100913 +0000 UTC m=+0.045149239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e5f7b8a5a794d8e210571ba096319c01a6e0e6deed160ff8b7b5cf4a5868ee/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e5f7b8a5a794d8e210571ba096319c01a6e0e6deed160ff8b7b5cf4a5868ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e5f7b8a5a794d8e210571ba096319c01a6e0e6deed160ff8b7b5cf4a5868ee/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:29 compute-0 podman[262321]: 2025-10-10 10:06:29.49000241 +0000 UTC m=+0.154050776 container init b64e2c28c00cdad64520497b05a727c694deda0f85dc605ac7e627b9d69aabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:06:29 compute-0 podman[262321]: 2025-10-10 10:06:29.500183189 +0000 UTC m=+0.164231485 container start b64e2c28c00cdad64520497b05a727c694deda0f85dc605ac7e627b9d69aabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 10:06:29 compute-0 bash[262321]: b64e2c28c00cdad64520497b05a727c694deda0f85dc605ac7e627b9d69aabac
Oct 10 10:06:29 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:06:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:29 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:06:30 compute-0 ceph-mon[73551]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:30.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:30.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:06:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:32 compute-0 ceph-mon[73551]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:32.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:34 compute-0 ceph-mon[73551]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:34.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:34.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:35 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:06:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:35 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:06:35 compute-0 sudo[262384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:06:35 compute-0 sudo[262384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:35 compute-0 sudo[262384]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:36 compute-0 ceph-mon[73551]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:36.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:36.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:37.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:06:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:37] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:06:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:37] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:06:38 compute-0 ceph-mon[73551]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:38.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:38.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:06:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:40 compute-0 ceph-mon[73551]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:06:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:40.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:40.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:06:41 compute-0 podman[262415]: 2025-10-10 10:06:41.221278495 +0000 UTC m=+0.062819110 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:06:41 compute-0 podman[262435]: 2025-10-10 10:06:41.315492837 +0000 UTC m=+0.060062930 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:06:41 compute-0 podman[262454]: 2025-10-10 10:06:41.440874046 +0000 UTC m=+0.093815300 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:06:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:41 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a90000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:06:41.892 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:06:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:06:41.893 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:06:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:06:41.893 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:06:42 compute-0 ceph-mon[73551]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:06:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:42.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:06:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:43 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:43 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a64000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:43 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a60000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:44 compute-0 ceph-mon[73551]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:06:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:44.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:45 compute-0 ceph-mon[73551]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100645 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:06:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:45 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a6c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:45 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:45 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:46 compute-0 sudo[262500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:06:46 compute-0 sudo[262500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:46 compute-0 sudo[262500]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:46 compute-0 sudo[262525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:06:46 compute-0 sudo[262525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3155134759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:06:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:46.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:46 compute-0 sudo[262525]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:06:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:06:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:06:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:06:46 compute-0 sudo[262581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:06:46 compute-0 sudo[262581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:46 compute-0 sudo[262581]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:46 compute-0 sudo[262607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:06:46 compute-0 sudo[262607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:46 compute-0 nova_compute[261329]: 2025-10-10 10:06:46.914 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:46 compute-0 nova_compute[261329]: 2025-10-10 10:06:46.915 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:46 compute-0 nova_compute[261329]: 2025-10-10 10:06:46.915 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:06:46 compute-0 nova_compute[261329]: 2025-10-10 10:06:46.915 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.051 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.051 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.052 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.052 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.053 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.053 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.053 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:47.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.116 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.117 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.117 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.206 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.206 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.207 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.207 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.207 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1920469742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2009889929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:06:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2676868760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.361906544 +0000 UTC m=+0.050525633 container create 5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:06:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:47] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:47] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:47 compute-0 systemd[1]: Started libpod-conmon-5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37.scope.
Oct 10 10:06:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:47 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:47 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.341093262 +0000 UTC m=+0.029712381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.444737349 +0000 UTC m=+0.133356448 container init 5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.450834836 +0000 UTC m=+0.139453925 container start 5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_pare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.453847193 +0000 UTC m=+0.142466332 container attach 5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_pare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 10:06:47 compute-0 epic_pare[262705]: 167 167
Oct 10 10:06:47 compute-0 systemd[1]: libpod-5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37.scope: Deactivated successfully.
Oct 10 10:06:47 compute-0 conmon[262705]: conmon 5c06429854cc52b9aaa5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37.scope/container/memory.events
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.457185571 +0000 UTC m=+0.145804660 container died 5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_pare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7640a7b7d9e487dceb9e37149a52fff4cfb3ccbac6ae52ce21a10d24eca22478-merged.mount: Deactivated successfully.
Oct 10 10:06:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:47 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a6c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:47 compute-0 podman[262673]: 2025-10-10 10:06:47.490637291 +0000 UTC m=+0.179256390 container remove 5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_pare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:06:47 compute-0 systemd[1]: libpod-conmon-5c06429854cc52b9aaa5b873a85a648878d48bc27a0dff641720f872ae4dfe37.scope: Deactivated successfully.
Oct 10 10:06:47 compute-0 podman[262728]: 2025-10-10 10:06:47.670562902 +0000 UTC m=+0.043800825 container create cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:06:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:06:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854743988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-0 systemd[1]: Started libpod-conmon-cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6.scope.
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.715 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:06:47 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fd8a1b77dcc1d343ad8f8a9b0ca42c24778cb75ebc65b54c72f0835baaa4b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:47 compute-0 podman[262728]: 2025-10-10 10:06:47.650189754 +0000 UTC m=+0.023427707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fd8a1b77dcc1d343ad8f8a9b0ca42c24778cb75ebc65b54c72f0835baaa4b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fd8a1b77dcc1d343ad8f8a9b0ca42c24778cb75ebc65b54c72f0835baaa4b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fd8a1b77dcc1d343ad8f8a9b0ca42c24778cb75ebc65b54c72f0835baaa4b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fd8a1b77dcc1d343ad8f8a9b0ca42c24778cb75ebc65b54c72f0835baaa4b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:47 compute-0 podman[262728]: 2025-10-10 10:06:47.758337007 +0000 UTC m=+0.131574960 container init cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 10:06:47 compute-0 podman[262728]: 2025-10-10 10:06:47.766080367 +0000 UTC m=+0.139318290 container start cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 10:06:47 compute-0 podman[262728]: 2025-10-10 10:06:47.769888331 +0000 UTC m=+0.143126274 container attach cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.857 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.858 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.859 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:06:47 compute-0 nova_compute[261329]: 2025-10-10 10:06:47.859 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:06:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:47 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:48 compute-0 stupefied_davinci[262747]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:06:48 compute-0 stupefied_davinci[262747]: --> All data devices are unavailable
Oct 10 10:06:48 compute-0 systemd[1]: libpod-cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6.scope: Deactivated successfully.
Oct 10 10:06:48 compute-0 podman[262728]: 2025-10-10 10:06:48.103401321 +0000 UTC m=+0.476639244 container died cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 10:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-32fd8a1b77dcc1d343ad8f8a9b0ca42c24778cb75ebc65b54c72f0835baaa4b5-merged.mount: Deactivated successfully.
Oct 10 10:06:48 compute-0 podman[262728]: 2025-10-10 10:06:48.152258769 +0000 UTC m=+0.525496692 container remove cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:06:48 compute-0 systemd[1]: libpod-conmon-cb9b9c30aef51de50d6f78edf57672cd7a478255a1ea42d163981c25b2e9f4e6.scope: Deactivated successfully.
Oct 10 10:06:48 compute-0 sudo[262607]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:48 compute-0 sudo[262778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:06:48 compute-0 sudo[262778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:48 compute-0 sudo[262778]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3854743988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:48 compute-0 sudo[262803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:06:48 compute-0 sudo[262803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:48 compute-0 nova_compute[261329]: 2025-10-10 10:06:48.489 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:06:48 compute-0 nova_compute[261329]: 2025-10-10 10:06:48.490 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:06:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:48.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:48 compute-0 nova_compute[261329]: 2025-10-10 10:06:48.572 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:06:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.766207638 +0000 UTC m=+0.041876883 container create fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:06:48 compute-0 systemd[1]: Started libpod-conmon-fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d.scope.
Oct 10 10:06:48 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.844705703 +0000 UTC m=+0.120374968 container init fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.751842804 +0000 UTC m=+0.027512079 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.853477807 +0000 UTC m=+0.129147052 container start fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.857384722 +0000 UTC m=+0.133053987 container attach fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:06:48 compute-0 romantic_swanson[262908]: 167 167
Oct 10 10:06:48 compute-0 systemd[1]: libpod-fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d.scope: Deactivated successfully.
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.860214784 +0000 UTC m=+0.135884039 container died fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swanson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd2fcbe72f4b50a6281901606bd45c306ebd70cc25ec36134d1dddc5117ff636-merged.mount: Deactivated successfully.
Oct 10 10:06:48 compute-0 podman[262890]: 2025-10-10 10:06:48.901106975 +0000 UTC m=+0.176776210 container remove fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:06:48 compute-0 systemd[1]: libpod-conmon-fbd71304872ca0552115db941d8ec34067921ff2d9e0af684f36349dde800c6d.scope: Deactivated successfully.
Oct 10 10:06:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:06:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886198263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:49 compute-0 nova_compute[261329]: 2025-10-10 10:06:49.019 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:06:49 compute-0 nova_compute[261329]: 2025-10-10 10:06:49.025 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:06:49 compute-0 nova_compute[261329]: 2025-10-10 10:06:49.060 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:06:49 compute-0 nova_compute[261329]: 2025-10-10 10:06:49.062 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:06:49 compute-0 nova_compute[261329]: 2025-10-10 10:06:49.062 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:06:49 compute-0 nova_compute[261329]: 2025-10-10 10:06:49.062 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.073707279 +0000 UTC m=+0.043664142 container create a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:06:49 compute-0 systemd[1]: Started libpod-conmon-a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31.scope.
Oct 10 10:06:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab30d86eadb9980b185f7ed4864b5d63a5dc2209b158a7ec91f0114cab64687/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab30d86eadb9980b185f7ed4864b5d63a5dc2209b158a7ec91f0114cab64687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab30d86eadb9980b185f7ed4864b5d63a5dc2209b158a7ec91f0114cab64687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab30d86eadb9980b185f7ed4864b5d63a5dc2209b158a7ec91f0114cab64687/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.151569313 +0000 UTC m=+0.121526196 container init a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.057528257 +0000 UTC m=+0.027485140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.157817195 +0000 UTC m=+0.127774058 container start a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.161981659 +0000 UTC m=+0.131938522 container attach a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:06:49 compute-0 ceph-mon[73551]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1886198263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:49 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:49 compute-0 distracted_galois[262952]: {
Oct 10 10:06:49 compute-0 distracted_galois[262952]:     "0": [
Oct 10 10:06:49 compute-0 distracted_galois[262952]:         {
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "devices": [
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "/dev/loop3"
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             ],
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "lv_name": "ceph_lv0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "lv_size": "21470642176",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "name": "ceph_lv0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "tags": {
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.cluster_name": "ceph",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.crush_device_class": "",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.encrypted": "0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.osd_id": "0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.type": "block",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.vdo": "0",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:                 "ceph.with_tpm": "0"
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             },
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "type": "block",
Oct 10 10:06:49 compute-0 distracted_galois[262952]:             "vg_name": "ceph_vg0"
Oct 10 10:06:49 compute-0 distracted_galois[262952]:         }
Oct 10 10:06:49 compute-0 distracted_galois[262952]:     ]
Oct 10 10:06:49 compute-0 distracted_galois[262952]: }
Oct 10 10:06:49 compute-0 systemd[1]: libpod-a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31.scope: Deactivated successfully.
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.47997114 +0000 UTC m=+0.449928023 container died a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_galois, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:06:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:49 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ab30d86eadb9980b185f7ed4864b5d63a5dc2209b158a7ec91f0114cab64687-merged.mount: Deactivated successfully.
Oct 10 10:06:49 compute-0 podman[262936]: 2025-10-10 10:06:49.5282902 +0000 UTC m=+0.498247083 container remove a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:06:49 compute-0 systemd[1]: libpod-conmon-a082adb8dc155a87c7f648fac541d42f3a5dd8df3a6358537b90cdbd73219f31.scope: Deactivated successfully.
Oct 10 10:06:49 compute-0 sudo[262803]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:49 compute-0 sudo[262974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:06:49 compute-0 sudo[262974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:49 compute-0 sudo[262974]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:49 compute-0 sudo[262999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:06:49 compute-0 sudo[262999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:49 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a6c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.089022079 +0000 UTC m=+0.046709409 container create 341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:06:50 compute-0 systemd[1]: Started libpod-conmon-341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40.scope.
Oct 10 10:06:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.064688584 +0000 UTC m=+0.022375954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.173474117 +0000 UTC m=+0.131161487 container init 341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.181639711 +0000 UTC m=+0.139327071 container start 341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.18533278 +0000 UTC m=+0.143020160 container attach 341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:06:50 compute-0 awesome_goldberg[263082]: 167 167
Oct 10 10:06:50 compute-0 systemd[1]: libpod-341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40.scope: Deactivated successfully.
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.188089789 +0000 UTC m=+0.145777119 container died 341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 10:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3cf12ab80e29b44f5f22f52d44fd5620b214be54cb7136a5011c65a0ed4ddb-merged.mount: Deactivated successfully.
Oct 10 10:06:50 compute-0 podman[263065]: 2025-10-10 10:06:50.22372385 +0000 UTC m=+0.181411180 container remove 341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:06:50 compute-0 systemd[1]: libpod-conmon-341a731b0b1a58ee235d480df97a02f4a96a6a1aaa4e81e422dadc59dd447f40.scope: Deactivated successfully.
Oct 10 10:06:50 compute-0 podman[263108]: 2025-10-10 10:06:50.395609302 +0000 UTC m=+0.055784363 container create 74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:06:50 compute-0 systemd[1]: Started libpod-conmon-74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da.scope.
Oct 10 10:06:50 compute-0 podman[263108]: 2025-10-10 10:06:50.365825829 +0000 UTC m=+0.026000960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2def8108eb1442abd32b659247fdf95b323c45c53e2694d1468bb6dd81e9794/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2def8108eb1442abd32b659247fdf95b323c45c53e2694d1468bb6dd81e9794/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2def8108eb1442abd32b659247fdf95b323c45c53e2694d1468bb6dd81e9794/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2def8108eb1442abd32b659247fdf95b323c45c53e2694d1468bb6dd81e9794/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:50 compute-0 podman[263108]: 2025-10-10 10:06:50.477378652 +0000 UTC m=+0.137553723 container init 74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_yalow, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:06:50 compute-0 podman[263108]: 2025-10-10 10:06:50.484171801 +0000 UTC m=+0.144346872 container start 74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:06:50 compute-0 podman[263108]: 2025-10-10 10:06:50.487164698 +0000 UTC m=+0.147339769 container attach 74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:06:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:50.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:51 compute-0 lvm[263199]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:06:51 compute-0 lvm[263199]: VG ceph_vg0 finished
Oct 10 10:06:51 compute-0 nifty_yalow[263124]: {}
Oct 10 10:06:51 compute-0 systemd[1]: libpod-74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da.scope: Deactivated successfully.
Oct 10 10:06:51 compute-0 podman[263108]: 2025-10-10 10:06:51.245048005 +0000 UTC m=+0.905223076 container died 74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:06:51 compute-0 systemd[1]: libpod-74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da.scope: Consumed 1.259s CPU time.
Oct 10 10:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2def8108eb1442abd32b659247fdf95b323c45c53e2694d1468bb6dd81e9794-merged.mount: Deactivated successfully.
Oct 10 10:06:51 compute-0 podman[263108]: 2025-10-10 10:06:51.294076039 +0000 UTC m=+0.954251110 container remove 74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_yalow, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:06:51 compute-0 systemd[1]: libpod-conmon-74372944f2a9c5c20af40cccabf775211660eb2261bb8a5527c489ff4286e2da.scope: Deactivated successfully.
Oct 10 10:06:51 compute-0 sudo[262999]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:06:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:06:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:51 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:51 compute-0 sudo[263213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:06:51 compute-0 sudo[263213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:51 compute-0 sudo[263213]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:51 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:51 compute-0 ceph-mon[73551]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:51 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100652 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:06:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:52.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:53 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a6c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:53 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:53 compute-0 ceph-mon[73551]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:53 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:06:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:54.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:06:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:55 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:55 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a6c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:55 compute-0 ceph-mon[73551]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:55 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a60002050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:56 compute-0 sudo[263242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:06:56 compute-0 sudo[263242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:56 compute-0 sudo[263242]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:56.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:06:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:56.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:57.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:06:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:06:57.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:06:57 compute-0 podman[263269]: 2025-10-10 10:06:57.235363672 +0000 UTC m=+0.077370361 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:06:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:57] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:06:57] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:06:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:57 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a60002050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:57 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:57 compute-0 ceph-mon[73551]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:57 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a6c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:06:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:06:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:58.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
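The paired "starting new request / req done / beast" triplets above recur every two seconds from 192.168.122.102 and 192.168.122.100; they are anonymous "HEAD /" probes (load-balancer or monitoring health checks), not client traffic. A minimal sketch for aggregating them, assuming lines are fed in as plain strings in exactly the shape shown in this log:

    # Sketch: aggregate radosgw "beast" access-log lines per source IP.
    # The field layout is taken from the entries above, not from radosgw docs.
    import re
    from collections import Counter

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def summarize(lines):
        hits, lat = Counter(), Counter()
        for line in lines:
            m = BEAST.search(line)
            if not m:
                continue
            hits[m['ip']] += 1
            lat[m['ip']] += float(m['latency'])
        # per IP: (request count, mean latency in seconds)
        return {ip: (n, lat[ip] / n) for ip, n in hits.items()}

On the excerpt above this yields two sources at roughly one request every two seconds each, with sub-millisecond latencies.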
Oct 10 10:06:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:59 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a60002050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:59 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a640032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:59 compute-0 ceph-mon[73551]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:06:59 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:00.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:07:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:07:00 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:07:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:01 compute-0 kernel: ganesha.nfsd[262483]: segfault at 50 ip 00007f5b3bab632e sp 00007f5b09ffa210 error 4 in libntirpc.so.5.8[7f5b3ba9b000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 10 10:07:01 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
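The segfault record above already pins the crash: page-fault error code 4 decodes to a user-mode read of a non-present page, and the fault address 0x50 suggests a NULL pointer dereferenced at a small struct offset, with the instruction pointer inside libntirpc.so.5.8. A sketch of the offset arithmetic, using only values copied from that kernel line (the systemd-coredump frame further down reports a different offset, 0x2232e, presumably because it is measured from the ELF load base rather than from this executable mapping):

    # Values copied from the kernel segfault line above.
    ip   = 0x7f5b3bab632e      # faulting instruction pointer
    base = 0x7f5b3ba9b000      # start of the executable mapping of libntirpc
    size = 0x2c000             # size of that mapping, per the bracketed range
    assert base <= ip < base + size
    assert ip - base == 0x1b32e   # offset of the fault within the mapping
    # x86 page-fault error code 4: bit 0 clear (page not present),
    # bit 1 clear (read access), bit 2 set (user mode).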
Oct 10 10:07:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[262336]: 10/10/2025 10:07:01 : epoch 68e8daa5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a84001c00 fd 48 proxy ignored for local
Oct 10 10:07:01 compute-0 systemd[1]: Started Process Core Dump (PID 263292/UID 0).
Oct 10 10:07:01 compute-0 ceph-mon[73551]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:07:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:02.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:02 compute-0 systemd-coredump[263293]: Process 262340 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f5b3bab632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:07:02 compute-0 systemd[1]: systemd-coredump@7-263292-0.service: Deactivated successfully.
Oct 10 10:07:02 compute-0 systemd[1]: systemd-coredump@7-263292-0.service: Consumed 1.202s CPU time.
Oct 10 10:07:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:07:02 compute-0 podman[263299]: 2025-10-10 10:07:02.822188175 +0000 UTC m=+0.041442500 container died b64e2c28c00cdad64520497b05a727c694deda0f85dc605ac7e627b9d69aabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-58e5f7b8a5a794d8e210571ba096319c01a6e0e6deed160ff8b7b5cf4a5868ee-merged.mount: Deactivated successfully.
Oct 10 10:07:02 compute-0 podman[263299]: 2025-10-10 10:07:02.884928121 +0000 UTC m=+0.104182436 container remove b64e2c28c00cdad64520497b05a727c694deda0f85dc605ac7e627b9d69aabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:07:02 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:07:03 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:07:03 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.643s CPU time.
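systemd's "status=139" for the ganesha unit above is the conventional 128 + signal-number encoding, consistent with the SIGSEGV and core dump earlier in this log. A one-line check, assuming Linux signal numbering (SIGSEGV = 11):

    import signal
    assert 139 - 128 == signal.SIGSEGV == 11   # process was killed by SIGSEGV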
Oct 10 10:07:03 compute-0 ceph-mon[73551]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:07:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:04.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.24518 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.24518 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.24748 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:05 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:05 compute-0 ceph-mon[73551]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:07:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/919671696' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/4284610426' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:06 compute-0 rsyslogd[1006]: imjournal: 3918 messages lost due to rate-limiting (20000 allowed within 600 seconds)
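The imjournal line above gives enough numbers to size the loss: 3918 messages dropped against a budget of 20000 per 600 seconds. A quick computation; if dropping is unacceptable, rsyslog's imjournal module exposes ratelimit.burst and ratelimit.interval parameters that control this budget:

    lost, allowed, window = 3918, 20000, 600    # values from the log line
    arriving = lost + allowed
    print(f"dropped {lost/arriving:.1%} of ~{arriving/window:.0f} msg/s")
    # -> dropped 16.4% of ~40 msg/s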
Oct 10 10:07:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:06.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:07:06 compute-0 ceph-mon[73551]: from='client.24518 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:06 compute-0 ceph-mon[73551]: from='client.24518 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 10 10:07:06 compute-0 ceph-mon[73551]: from='client.24748 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:07:07.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:07:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:07] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:07:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:07] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:07:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100707 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
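The haproxy WARNING above (nfs.cephfs.2 DOWN at 10:07:07, Layer4 connection refused) is the load balancer noticing the crashed ganesha backend; further transitions for other backends appear below (nfs.cephfs.0 UP at 10:07:12, nfs.cephfs.1 DOWN at 10:07:22). A sketch for pulling these transitions into a timeline, matching only the line shape shown in this log rather than haproxy's log format in general:

    # Sketch: extract backend server state changes from haproxy lines
    # of the form "Server backend/nfs.cephfs.2 is DOWN, reason: ...".
    import re

    STATE = re.compile(
        r'Server (?P<backend>\S+)/(?P<server>\S+) is (?P<state>UP|DOWN), '
        r'reason: (?P<reason>[^,]+)'
    )

    def transitions(lines):
        for line in lines:
            m = STATE.search(line)
            if m:
                yield m['backend'], m['server'], m['state'], m['reason']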
Oct 10 10:07:07 compute-0 ceph-mon[73551]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:07:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:09 compute-0 ceph-mon[73551]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:07:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:10.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:07:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:12 compute-0 ceph-mon[73551]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:12 compute-0 podman[263352]: 2025-10-10 10:07:12.243943202 +0000 UTC m=+0.075943874 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 10 10:07:12 compute-0 podman[263354]: 2025-10-10 10:07:12.258220092 +0000 UTC m=+0.092212309 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 10 10:07:12 compute-0 podman[263353]: 2025-10-10 10:07:12.284356966 +0000 UTC m=+0.110615573 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:07:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100712 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:07:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:07:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:07:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:13 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 8.
Oct 10 10:07:13 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:07:13 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.643s CPU time.
Oct 10 10:07:13 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:07:13 compute-0 podman[263468]: 2025-10-10 10:07:13.382311156 +0000 UTC m=+0.055639688 container create ffb6698e3865bae664929ab529d25ebe19f85e69a372434db20fff0b2f207d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70148110a2a72389be3fc38b2754503c3c3865bffb9276d7cf22f1ff1a166d3/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70148110a2a72389be3fc38b2754503c3c3865bffb9276d7cf22f1ff1a166d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70148110a2a72389be3fc38b2754503c3c3865bffb9276d7cf22f1ff1a166d3/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70148110a2a72389be3fc38b2754503c3c3865bffb9276d7cf22f1ff1a166d3/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:13 compute-0 podman[263468]: 2025-10-10 10:07:13.443686828 +0000 UTC m=+0.117015360 container init ffb6698e3865bae664929ab529d25ebe19f85e69a372434db20fff0b2f207d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:07:13 compute-0 podman[263468]: 2025-10-10 10:07:13.451022625 +0000 UTC m=+0.124351157 container start ffb6698e3865bae664929ab529d25ebe19f85e69a372434db20fff0b2f207d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 10:07:13 compute-0 bash[263468]: ffb6698e3865bae664929ab529d25ebe19f85e69a372434db20fff0b2f207d90
Oct 10 10:07:13 compute-0 podman[263468]: 2025-10-10 10:07:13.361933558 +0000 UTC m=+0.035262120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:07:13 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:07:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:13 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:14 compute-0 ceph-mon[73551]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:14.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:14.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 10 10:07:16 compute-0 ceph-mon[73551]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 10 10:07:16 compute-0 sudo[263528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:07:16 compute-0 sudo[263528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:16 compute-0 sudo[263528]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:07:16
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', 'default.rgw.log', 'vms', '.rgw.root', 'images', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes']
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:07:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:07:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:07:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:07:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:16.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:07:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:16.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
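The pg_autoscaler figures above are internally consistent: each logged "pg target" equals the pool's share of space times its bias times 300. A sketch reproducing three of them; the constant 300 is read off the log itself and presumably decomposes as 3 OSDs times the default mon_target_pg_per_osd of 100 (an assumption, not something the log states):

    # Sketch: reproduce the logged pg_autoscaler targets from the
    # "using X of space, bias B" figures above.
    PG_BUDGET = 300   # inferred from the log; assumed 3 OSDs * 100 pgs/OSD

    for pool, ratio, bias in [
        ('.mgr',               7.185749983720779e-06, 1.0),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0),
        ('default.rgw.meta',   1.2718141564107572e-07, 4.0),
    ]:
        print(pool, ratio * bias * PG_BUDGET)   # matches the logged pg target

The "quantized to" values are then the autoscaler's rounding of those fractional targets onto acceptable pg_num choices, shown next to the pool's current pg_num for comparison.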
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:07:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 10 10:07:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:07:17.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:07:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:07:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:17] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:07:18 compute-0 ceph-mon[73551]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 10 10:07:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:18.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:18.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:19 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:19 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:20 compute-0 ceph-mon[73551]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:20.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:20.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:07:22 compute-0 ceph-mon[73551]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:07:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:22.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:22.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100722 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:07:24 compute-0 ceph-mon[73551]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:24.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:24.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 10 10:07:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3351629822' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.15102 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.15102 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.24766 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:25 compute-0 ceph-mgr[73845]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 10:07:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3351629822' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1743770823' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
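The restarted ganesha entered its 90-second grace period at 10:07:13 and lifts it here at 10:07:25: the reclaim check above saw zero clients ("reclaim complete(0) clid count(0)"), so there was no state to wait for. A trivial check of the actual wait, with timestamps copied from this log:

    from datetime import datetime
    enter = datetime(2025, 10, 10, 10, 7, 13)   # "Now IN GRACE, duration 90"
    lift  = datetime(2025, 10, 10, 10, 7, 25)   # "Now NOT IN GRACE"
    assert (lift - enter).total_seconds() == 12  # lifted early, not after 90 s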
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:07:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:25 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5cc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:26 compute-0 ceph-mon[73551]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:26 compute-0 ceph-mon[73551]: from='client.15102 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:26 compute-0 ceph-mon[73551]: from='client.15102 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 10 10:07:26 compute-0 ceph-mon[73551]: from='client.24766 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:26.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:07:27.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:07:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1938819953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:07:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1938819953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:07:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:07:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:27] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:07:27 compute-0 kernel: ganesha.nfsd[263566]: segfault at 50 ip 00007fc67aa9532e sp 00007fc645ffa210 error 4 in libntirpc.so.5.8[7fc67aa7a000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 10:07:27 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:07:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263484]: 10/10/2025 10:07:27 : epoch 68e8dad1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5c8001c40 fd 38 proxy ignored for local
Oct 10 10:07:27 compute-0 systemd[1]: Started Process Core Dump (PID 263582/UID 0).
Oct 10 10:07:27 compute-0 podman[263583]: 2025-10-10 10:07:27.570258964 +0000 UTC m=+0.059630917 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 10 10:07:28 compute-0 ceph-mon[73551]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:28.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:28 compute-0 systemd-coredump[263584]: Process 263488 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 42:
                                                    #0  0x00007fc67aa9532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:07:28 compute-0 systemd[1]: systemd-coredump@8-263582-0.service: Deactivated successfully.
Oct 10 10:07:28 compute-0 systemd[1]: systemd-coredump@8-263582-0.service: Consumed 1.176s CPU time.
Oct 10 10:07:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:07:28 compute-0 podman[263610]: 2025-10-10 10:07:28.789636106 +0000 UTC m=+0.037588191 container died ffb6698e3865bae664929ab529d25ebe19f85e69a372434db20fff0b2f207d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f70148110a2a72389be3fc38b2754503c3c3865bffb9276d7cf22f1ff1a166d3-merged.mount: Deactivated successfully.
Oct 10 10:07:28 compute-0 podman[263610]: 2025-10-10 10:07:28.834379109 +0000 UTC m=+0.082331174 container remove ffb6698e3865bae664929ab529d25ebe19f85e69a372434db20fff0b2f207d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:07:28 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:07:29 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:07:29 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.463s CPU time.
Oct 10 10:07:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:30 compute-0 ceph-mon[73551]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:07:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:30.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Oct 10 10:07:31 compute-0 ceph-mon[73551]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Oct 10 10:07:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:07:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Oct 10 10:07:33 compute-0 ceph-mon[73551]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Oct 10 10:07:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:34.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:34.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 10 10:07:35 compute-0 ceph-mon[73551]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 10 10:07:36 compute-0 sudo[263662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:07:36 compute-0 sudo[263662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:36 compute-0 sudo[263662]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000066s ======
Oct 10 10:07:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Oct 10 10:07:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:36.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 10 10:07:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:07:37.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:07:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:37] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:07:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:37] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 10 10:07:37 compute-0 ceph-mon[73551]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 10 10:07:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:38.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:38.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Oct 10 10:07:39 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 9.
Oct 10 10:07:39 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:07:39 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.463s CPU time.
Oct 10 10:07:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:39 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:07:39 compute-0 podman[263742]: 2025-10-10 10:07:39.409865788 +0000 UTC m=+0.051440942 container create f71b17c5038e60c63b8880e7a2ed1cdbbef9502a2f017ad9eb312fbf20f106fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct 10 10:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c38c991bbdd92fdd227423338f523a43940844dd05dc41d41dd6d8ded1e10c5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c38c991bbdd92fdd227423338f523a43940844dd05dc41d41dd6d8ded1e10c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c38c991bbdd92fdd227423338f523a43940844dd05dc41d41dd6d8ded1e10c5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c38c991bbdd92fdd227423338f523a43940844dd05dc41d41dd6d8ded1e10c5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:39 compute-0 podman[263742]: 2025-10-10 10:07:39.470396293 +0000 UTC m=+0.111971497 container init f71b17c5038e60c63b8880e7a2ed1cdbbef9502a2f017ad9eb312fbf20f106fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:07:39 compute-0 podman[263742]: 2025-10-10 10:07:39.475414107 +0000 UTC m=+0.116989281 container start f71b17c5038e60c63b8880e7a2ed1cdbbef9502a2f017ad9eb312fbf20f106fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:07:39 compute-0 bash[263742]: f71b17c5038e60c63b8880e7a2ed1cdbbef9502a2f017ad9eb312fbf20f106fe
Oct 10 10:07:39 compute-0 podman[263742]: 2025-10-10 10:07:39.390946394 +0000 UTC m=+0.032521568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:07:39 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:07:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:39 compute-0 ceph-mon[73551]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Oct 10 10:07:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:07:41 compute-0 ceph-mon[73551]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:07:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:07:41.893 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:07:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:07:41.894 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:07:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:07:41.894 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:07:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:42.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:07:43 compute-0 podman[263804]: 2025-10-10 10:07:43.241506947 +0000 UTC m=+0.085024752 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible)
Oct 10 10:07:43 compute-0 podman[263803]: 2025-10-10 10:07:43.253543188 +0000 UTC m=+0.095557144 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 10:07:43 compute-0 podman[263805]: 2025-10-10 10:07:43.275645186 +0000 UTC m=+0.112820075 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:07:43 compute-0 ceph-mon[73551]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:07:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:44.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:44.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:45 compute-0 ceph-mon[73551]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 10 10:07:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:07:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:07:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:46.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:46.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 10 10:07:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:07:47.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.382 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.382 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:47] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:07:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:47] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.404 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.405 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.405 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.423 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.424 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.425 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.425 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.426 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.427 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.427 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.427 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.428 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.451 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.452 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.452 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.453 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.453 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:07:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:07:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191965780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-0 ceph-mon[73551]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 10 10:07:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4016404872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3155772583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1191965780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-0 nova_compute[261329]: 2025-10-10 10:07:47.968 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.125 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.126 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4872MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.126 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.127 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.179 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.179 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.194 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:07:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:48.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:07:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798602020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.636 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.641 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.664 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.667 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:07:48 compute-0 nova_compute[261329]: 2025-10-10 10:07:48.667 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
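[editor's note] The inventory payload logged by nova.scheduler.client.report above carries everything placement needs to compute usable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, with the dict copied from the log line (the helper function is illustrative, not a nova API):

    # Capacity per resource class, as placement derives it from the
    # inventory nova reported above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    def capacity(inv):
        # Illustrative helper, not part of nova; mirrors placement's formula.
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}

    print(capacity(inventory))
    # -> {'MEMORY_MB': 7168.0, 'VCPU': 32.0, 'DISK_GB': 53.1}

These figures match the final resource view above: 7680MB physical RAM minus the 512MB reservation, 8 vCPUs overcommitted 4:1, and 59GB of disk derated to 90%.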
Oct 10 10:07:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 10:07:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100748 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:07:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/85312783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3798602020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4252308067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:49 compute-0 ceph-mon[73551]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 10:07:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
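[editor's note] The anonymous "HEAD / HTTP/1.0" requests that beast logs every two seconds from 192.168.122.100 and 192.168.122.102 are most likely load-balancer health probes against the RGW frontend, which would explain the missing user and near-zero latency. An equivalent probe as a sketch; the host and port are assumptions, since the log records only the probing clients, not the gateway's listening address:

    # Sketch of the probe seen in the beast access log above.
    # RGW_HOST and RGW_PORT are assumptions. http.client speaks
    # HTTP/1.1 rather than the HTTP/1.0 shown in the log; RGW
    # answers both the same way.
    import http.client

    RGW_HOST, RGW_PORT = 'compute-0.ctlplane.example.com', 8080
    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 for a healthy gateway, as logged
    conn.close()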
Oct 10 10:07:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001d:nfs.cephfs.2: -2
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:07:51 compute-0 sudo[263929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:07:51 compute-0 sudo[263929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:51 compute-0 sudo[263929]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:51 compute-0 sudo[263954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:07:51 compute-0 sudo[263954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a08000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 10:07:52 compute-0 sudo[263954]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/osd_remove_queue"}]: dispatch
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:07:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:52.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/spec.nfs.cephfs"}]: dispatch
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:07:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:07:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:07:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:52.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:52 compute-0 sudo[264014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:07:52 compute-0 sudo[264014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:52 compute-0 sudo[264014]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:52 compute-0 sudo[264039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:07:52 compute-0 sudo[264039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/osd_remove_queue"}]: dispatch
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/spec.nfs.cephfs"}]: dispatch
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:07:53 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.216527555 +0000 UTC m=+0.044259108 container create 7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:07:53 compute-0 systemd[1]: Started libpod-conmon-7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248.scope.
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.194581043 +0000 UTC m=+0.022312566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.312649787 +0000 UTC m=+0.140381380 container init 7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.320944907 +0000 UTC m=+0.148676420 container start 7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_germain, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:53 compute-0 upbeat_germain[264124]: 167 167
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.32473735 +0000 UTC m=+0.152468903 container attach 7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_germain, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:07:53 compute-0 systemd[1]: libpod-7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248.scope: Deactivated successfully.
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.327876761 +0000 UTC m=+0.155608294 container died 7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1de32da3ab370dc6b8cabc7cc7f97093c9cf94c8297e13357f1ba4718256d2-merged.mount: Deactivated successfully.
Oct 10 10:07:53 compute-0 podman[264107]: 2025-10-10 10:07:53.364927515 +0000 UTC m=+0.192659028 container remove 7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_germain, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 10:07:53 compute-0 systemd[1]: libpod-conmon-7f6e14efa2e0ebcb992411a5294d416dcfc8c18de560d18ad6adc3cf26ca2248.scope: Deactivated successfully.
Oct 10 10:07:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:53 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:53 compute-0 podman[264149]: 2025-10-10 10:07:53.534587155 +0000 UTC m=+0.045641373 container create 9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 10:07:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:53 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:53 compute-0 systemd[1]: Started libpod-conmon-9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259.scope.
Oct 10 10:07:53 compute-0 podman[264149]: 2025-10-10 10:07:53.512962102 +0000 UTC m=+0.024016340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8164f4dfab00e8110df807239ecea08341d42b4a8138762063750d2e299a58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8164f4dfab00e8110df807239ecea08341d42b4a8138762063750d2e299a58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8164f4dfab00e8110df807239ecea08341d42b4a8138762063750d2e299a58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8164f4dfab00e8110df807239ecea08341d42b4a8138762063750d2e299a58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8164f4dfab00e8110df807239ecea08341d42b4a8138762063750d2e299a58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:53 compute-0 podman[264149]: 2025-10-10 10:07:53.65511061 +0000 UTC m=+0.166164808 container init 9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:07:53 compute-0 podman[264149]: 2025-10-10 10:07:53.663985167 +0000 UTC m=+0.175039365 container start 9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:07:53 compute-0 podman[264149]: 2025-10-10 10:07:53.668037529 +0000 UTC m=+0.179091737 container attach 9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_brahmagupta, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:07:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:53 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:53 compute-0 heuristic_brahmagupta[264166]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:07:53 compute-0 heuristic_brahmagupta[264166]: --> All data devices are unavailable
Oct 10 10:07:54 compute-0 systemd[1]: libpod-9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259.scope: Deactivated successfully.
Oct 10 10:07:54 compute-0 podman[264149]: 2025-10-10 10:07:54.009167308 +0000 UTC m=+0.520221506 container died 9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:07:54 compute-0 ceph-mon[73551]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 10:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e8164f4dfab00e8110df807239ecea08341d42b4a8138762063750d2e299a58-merged.mount: Deactivated successfully.
Oct 10 10:07:54 compute-0 podman[264149]: 2025-10-10 10:07:54.048832796 +0000 UTC m=+0.559886984 container remove 9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_brahmagupta, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:07:54 compute-0 systemd[1]: libpod-conmon-9f58be333274b140948351f8656902145ff6d1198bfa177b6b43483fa353a259.scope: Deactivated successfully.
Oct 10 10:07:54 compute-0 sudo[264039]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:54 compute-0 sudo[264194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:07:54 compute-0 sudo[264194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:54 compute-0 sudo[264194]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:54 compute-0 sudo[264219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:07:54 compute-0 sudo[264219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:54.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:54.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.652730739 +0000 UTC m=+0.057341533 container create d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 10:07:54 compute-0 systemd[1]: Started libpod-conmon-d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951.scope.
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.619250191 +0000 UTC m=+0.023861055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.741789001 +0000 UTC m=+0.146399785 container init d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.7549929 +0000 UTC m=+0.159603694 container start d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.76084046 +0000 UTC m=+0.165451304 container attach d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:54 compute-0 elegant_mahavira[264301]: 167 167
Oct 10 10:07:54 compute-0 systemd[1]: libpod-d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951.scope: Deactivated successfully.
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.762959639 +0000 UTC m=+0.167570423 container died d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mahavira, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-19e20e5298a8d1eace54a573f8260bac58c04479018d6ddd30a5ed0ff87838f3-merged.mount: Deactivated successfully.
Oct 10 10:07:54 compute-0 podman[264285]: 2025-10-10 10:07:54.820262549 +0000 UTC m=+0.224873303 container remove d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 10:07:54 compute-0 systemd[1]: libpod-conmon-d76a3fff46eedfa4a3b74939c4f410c0c2d49ba7dde3e04a4b8522aead6b3951.scope: Deactivated successfully.
Oct 10 10:07:55 compute-0 podman[264328]: 2025-10-10 10:07:55.017488145 +0000 UTC m=+0.048580809 container create cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:07:55 compute-0 systemd[1]: Started libpod-conmon-cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b.scope.
Oct 10 10:07:55 compute-0 podman[264328]: 2025-10-10 10:07:54.994062154 +0000 UTC m=+0.025154808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1d65c679f7679afaf30c3a3f7be3108845de03a9adc52e8a0803943d74a0ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1d65c679f7679afaf30c3a3f7be3108845de03a9adc52e8a0803943d74a0ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1d65c679f7679afaf30c3a3f7be3108845de03a9adc52e8a0803943d74a0ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1d65c679f7679afaf30c3a3f7be3108845de03a9adc52e8a0803943d74a0ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:55 compute-0 podman[264328]: 2025-10-10 10:07:55.128889313 +0000 UTC m=+0.159981987 container init cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:07:55 compute-0 podman[264328]: 2025-10-10 10:07:55.136948414 +0000 UTC m=+0.168041058 container start cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 10:07:55 compute-0 podman[264328]: 2025-10-10 10:07:55.140463629 +0000 UTC m=+0.171556313 container attach cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:07:55 compute-0 gallant_buck[264345]: {
Oct 10 10:07:55 compute-0 gallant_buck[264345]:     "0": [
Oct 10 10:07:55 compute-0 gallant_buck[264345]:         {
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "devices": [
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "/dev/loop3"
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             ],
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "lv_name": "ceph_lv0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "lv_size": "21470642176",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "name": "ceph_lv0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "tags": {
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.cluster_name": "ceph",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.crush_device_class": "",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.encrypted": "0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.osd_id": "0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.type": "block",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.vdo": "0",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:                 "ceph.with_tpm": "0"
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             },
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "type": "block",
Oct 10 10:07:55 compute-0 gallant_buck[264345]:             "vg_name": "ceph_vg0"
Oct 10 10:07:55 compute-0 gallant_buck[264345]:         }
Oct 10 10:07:55 compute-0 gallant_buck[264345]:     ]
Oct 10 10:07:55 compute-0 gallant_buck[264345]: }
Oct 10 10:07:55 compute-0 systemd[1]: libpod-cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b.scope: Deactivated successfully.
Oct 10 10:07:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100755 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:07:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:55 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:55 compute-0 podman[264354]: 2025-10-10 10:07:55.541425702 +0000 UTC m=+0.044371832 container died cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:07:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:55 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd1d65c679f7679afaf30c3a3f7be3108845de03a9adc52e8a0803943d74a0ec-merged.mount: Deactivated successfully.
Oct 10 10:07:55 compute-0 podman[264354]: 2025-10-10 10:07:55.581576115 +0000 UTC m=+0.084522165 container remove cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:55 compute-0 systemd[1]: libpod-conmon-cf6414ef0433f4a0f3a598721330cec9c5d2353e35ce37347c5b95924e30794b.scope: Deactivated successfully.
Oct 10 10:07:55 compute-0 sudo[264219]: pam_unix(sudo:session): session closed for user root
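[editor's note] The JSON printed by gallant_buck above is the `ceph-volume lvm list --format json` payload cephadm consumes: a map from OSD id to the LV records backing it. It also explains the earlier `lvm batch` result "All data devices are unavailable": /dev/ceph_vg0/ceph_lv0 already carries ceph.osd_id=0, so there is nothing new to provision. A short sketch of the same check (function name and file path are ours):

    # Sketch: map already-consumed devices out of `ceph-volume lvm list
    # --format json` output (the JSON printed by gallant_buck above).
    # Any LV appearing here is why `lvm batch` reported
    # "All data devices are unavailable".
    import json

    def consumed_devices(lvm_list_text):
        # {lv_path: osd_id} for every LV that already backs an OSD.
        return {lv['lv_path']: osd_id
                for osd_id, lvs in json.loads(lvm_list_text).items()
                for lv in lvs}

    with open('lvm_list.json') as f:   # captured output; path is illustrative
        print(consumed_devices(f.read()))
    # -> {'/dev/ceph_vg0/ceph_lv0': '0'}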
Oct 10 10:07:55 compute-0 sudo[264369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:07:55 compute-0 sudo[264369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:55 compute-0 sudo[264369]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:55 compute-0 sudo[264394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:07:55 compute-0 sudo[264394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:55 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:56 compute-0 ceph-mon[73551]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.223804534 +0000 UTC m=+0.047780273 container create 0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:07:56 compute-0 systemd[1]: Started libpod-conmon-0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7.scope.
Oct 10 10:07:56 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.2024218 +0000 UTC m=+0.026397629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.308924599 +0000 UTC m=+0.132900418 container init 0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.317375473 +0000 UTC m=+0.141351212 container start 0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.321179276 +0000 UTC m=+0.145155055 container attach 0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:07:56 compute-0 adoring_wu[264477]: 167 167
Oct 10 10:07:56 compute-0 systemd[1]: libpod-0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7.scope: Deactivated successfully.
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.323733119 +0000 UTC m=+0.147708868 container died 0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_wu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:07:56 compute-0 sudo[264480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:07:56 compute-0 sudo[264480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:56 compute-0 sudo[264480]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c80b56657afacbf9c1ea44285d5e93e91fa45f71cce29f6c1e76b2596ac7d50-merged.mount: Deactivated successfully.
Oct 10 10:07:56 compute-0 podman[264461]: 2025-10-10 10:07:56.375386117 +0000 UTC m=+0.199361856 container remove 0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:56 compute-0 systemd[1]: libpod-conmon-0dafa5f50054929e07d39958ff546ca7a3b8658b31213a070410f2c801c3ead7.scope: Deactivated successfully.
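[note] The create → init → start → attach → died → remove sequence above is the normal journal footprint of a one-shot `podman run --rm`: cephadm launches the ceph image, reads a single line of output ("167 167", the ceph uid/gid), and the container is reaped within milliseconds. A hedged reproduction; the image digest is copied from the log, but the command run inside is a guess since the log only shows its output:

    import subprocess

    # Assumption: a one-shot uid/gid probe like the ones cephadm runs above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    # Each such run emits exactly one container create/init/start/attach/
    # died/remove lifecycle in the journal, as seen above.
    print(result.stdout.strip())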
Oct 10 10:07:56 compute-0 podman[264525]: 2025-10-10 10:07:56.602685729 +0000 UTC m=+0.051936898 container create 29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elion, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 10:07:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:56 compute-0 systemd[1]: Started libpod-conmon-29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5.scope.
Oct 10 10:07:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:56 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7342a8316a80689a34c2b793d09024382dd3c7b1f609b65f53795440dc3b502/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7342a8316a80689a34c2b793d09024382dd3c7b1f609b65f53795440dc3b502/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7342a8316a80689a34c2b793d09024382dd3c7b1f609b65f53795440dc3b502/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7342a8316a80689a34c2b793d09024382dd3c7b1f609b65f53795440dc3b502/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:07:56 compute-0 podman[264525]: 2025-10-10 10:07:56.583644171 +0000 UTC m=+0.032895360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:07:56 compute-0 podman[264525]: 2025-10-10 10:07:56.68862096 +0000 UTC m=+0.137872149 container init 29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 10:07:56 compute-0 podman[264525]: 2025-10-10 10:07:56.704813145 +0000 UTC m=+0.154064324 container start 29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:07:56 compute-0 podman[264525]: 2025-10-10 10:07:56.709695104 +0000 UTC m=+0.158946303 container attach 29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elion, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:07:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:07:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:07:57 compute-0 lvm[264617]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:07:57 compute-0 lvm[264617]: VG ceph_vg0 finished
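[note] The two lvm lines above are LVM event-based autoactivation reporting that every PV backing ceph_vg0 is now online. A quick way to confirm the same state from a script; `vgs --reportformat json` is standard LVM2, and the VG name comes from the log:

    import json
    import subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "ceph_vg0"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    for vg in report["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])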
Oct 10 10:07:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:57] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:07:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:07:57] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 10 10:07:57 compute-0 ecstatic_elion[264542]: {}
Oct 10 10:07:57 compute-0 systemd[1]: libpod-29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5.scope: Deactivated successfully.
Oct 10 10:07:57 compute-0 podman[264525]: 2025-10-10 10:07:57.437371067 +0000 UTC m=+0.886622236 container died 29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elion, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:07:57 compute-0 systemd[1]: libpod-29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5.scope: Consumed 1.158s CPU time.
Oct 10 10:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7342a8316a80689a34c2b793d09024382dd3c7b1f609b65f53795440dc3b502-merged.mount: Deactivated successfully.
Oct 10 10:07:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:57 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:57 compute-0 podman[264525]: 2025-10-10 10:07:57.481705397 +0000 UTC m=+0.930956566 container remove 29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:07:57 compute-0 systemd[1]: libpod-conmon-29b7729dbaf4e0472b328ed0f5918a20ac57e27157e1656793cb39d08150c0d5.scope: Deactivated successfully.
Oct 10 10:07:57 compute-0 sudo[264394]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:07:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.547371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877547406, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2128, "num_deletes": 251, "total_data_size": 4172778, "memory_usage": 4251240, "flush_reason": "Manual Compaction"}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 10 10:07:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:57 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877572489, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4069084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19957, "largest_seqno": 22083, "table_properties": {"data_size": 4059605, "index_size": 5973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19602, "raw_average_key_size": 20, "raw_value_size": 4040653, "raw_average_value_size": 4165, "num_data_blocks": 262, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090666, "oldest_key_time": 1760090666, "file_creation_time": 1760090877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 25267 microseconds, and 9039 cpu microseconds.
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.572634) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4069084 bytes OK
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.572685) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574125) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574140) EVENT_LOG_v1 {"time_micros": 1760090877574134, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574160) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4164171, prev total WAL file size 4200696, number of live WAL files 2.
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3973KB)], [44(12MB)]
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877575420, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16934409, "oldest_snapshot_seqno": -1}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5418 keys, 14721268 bytes, temperature: kUnknown
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877647155, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14721268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14683003, "index_size": 23627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136630, "raw_average_key_size": 25, "raw_value_size": 14582872, "raw_average_value_size": 2691, "num_data_blocks": 976, "num_entries": 5418, "num_filter_entries": 5418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760090877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.647610) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14721268 bytes
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.652048) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.7 rd, 204.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 5938, records dropped: 520 output_compression: NoCompression
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.652069) EVENT_LOG_v1 {"time_micros": 1760090877652059, "job": 22, "event": "compaction_finished", "compaction_time_micros": 71835, "compaction_time_cpu_micros": 29576, "output_level": 6, "num_output_files": 1, "total_output_size": 14721268, "num_input_records": 5938, "num_output_records": 5418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877653021, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877655459, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.655626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.655637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.655640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.655643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:07:57.655646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
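[note] The mon's RocksDB emits structured EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_finished) as JSON embedded in the journal lines above. A small self-contained parser for pulling those events back out of a saved journal, e.g. to track flush and compaction volumes over time; field names are taken from the records above:

    import json
    import re
    import sys

    # Match the JSON payload of RocksDB "EVENT_LOG_v1 {...}" journal lines.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    if __name__ == "__main__":
        # usage: journalctl -u <mon unit> | python3 rocksdb_events.py
        for ev in events(sys.stdin):
            if ev.get("event") == "compaction_finished":
                print("job", ev["job"], "wrote", ev["total_output_size"],
                      "bytes in", ev["compaction_time_micros"], "us")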
Oct 10 10:07:57 compute-0 sudo[264633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:07:57 compute-0 sudo[264633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:57 compute-0 sudo[264633]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:57 compute-0 podman[264657]: 2025-10-10 10:07:57.766163175 +0000 UTC m=+0.079238895 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:07:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:57 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:58 compute-0 ceph-mon[73551]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:58.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:07:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:07:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
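[note] The paired anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and .102 look like load-balancer liveness probes against radosgw's beast frontend. A sketch reproducing one probe; host and port are assumptions, since the journal does not show which port beast is bound to here (8080 is a common radosgw default):

    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080,
                                      timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the probes above all return 200 with an empty body
    conn.close()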
Oct 10 10:07:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:59 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:59 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:07:59 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
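[note] The recurring svc_vc_recv EVENT lines from ganesha.nfsd (the missing value after "rlen = %" is a formatting quirk in the upstream message, preserved verbatim here) are consistent with a peer opening TCP connections to a PROXY-protocol-enabled listener and closing them without sending a valid PROXY header, e.g. a plain TCP health check. A minimal probe that would plausibly trigger exactly this log line; host and port are assumptions (2049 is the standard NFS port):

    import socket

    # Bare TCP connect/close, no PROXY header, no NFS traffic.
    with socket.create_connection(("compute-0.ctlplane.example.com", 2049),
                                  timeout=5):
        pass  # closing immediately mimics a liveness probe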
Oct 10 10:08:00 compute-0 ceph-mon[73551]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:08:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:00.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:08:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:08:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:01 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:01 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:01 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:02 compute-0 ceph-mon[73551]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:08:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:02.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 10 10:08:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:03 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:03 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:03 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f00023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:04 compute-0 ceph-mon[73551]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 10 10:08:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:04.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:04.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:05 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:05 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:05 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:06 compute-0 ceph-mon[73551]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:06.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:06.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:07.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:07.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:07.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
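[note] Alertmanager is repeatedly failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on :8443, with "i/o timeout" rather than a refusal, which usually points at an unreachable or filtered endpoint rather than a down service. A sketch probing the same receiver URL with a short timeout to tell the two apart; the URL is copied from the log, the empty JSON body is an assumption:

    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        urllib.request.urlopen(
            urllib.request.Request(URL, data=b"{}", method="POST"), timeout=5)
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)  # timeout vs. connection refused
    else:
        print("receiver answered")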
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:07] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:08:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:07] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:07 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:07 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:07 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:08 compute-0 ceph-mon[73551]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:08.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:09 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:09 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:09 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:10 compute-0 ceph-mon[73551]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:10.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:11 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:11 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:11 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:12 compute-0 ceph-mon[73551]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:12.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:08:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:12.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:08:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:13 compute-0 ceph-mon[73551]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:13 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:13 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:13 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:14 compute-0 podman[264694]: 2025-10-10 10:08:14.230099954 +0000 UTC m=+0.069201998 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 10 10:08:14 compute-0 podman[264695]: 2025-10-10 10:08:14.247881332 +0000 UTC m=+0.083429311 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:08:14 compute-0 podman[264696]: 2025-10-10 10:08:14.275119736 +0000 UTC m=+0.110956994 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
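[note] The three health_status=healthy events above come from podman's periodic healthcheck timers executing the /openstack/healthcheck script mounted into each container. The same check can be run on demand; `podman healthcheck run` exits 0 for healthy and non-zero otherwise, and container names are taken from the log:

    import subprocess

    for name in ("multipathd", "iscsid", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")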
Oct 10 10:08:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:14.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:14.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:15 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:15 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:15 compute-0 ceph-mon[73551]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:15 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:08:16
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', '.nfs', 'default.rgw.log', 'images', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:08:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:08:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:08:16 compute-0 sudo[264762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:08:16 compute-0 sudo[264762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:16 compute-0 sudo[264762]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:08:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:08:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:16.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:17.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:08:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:08:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:08:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:17 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:17 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:17 compute-0 ceph-mon[73551]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:17 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:18 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 10:08:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:18.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:18.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:19 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:19 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:19 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:08:19.604 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:08:19 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:08:19.606 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:08:19 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:08:19.607 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:08:19 compute-0 ceph-mon[73551]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:19 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:20.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:20.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:21 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:21 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:21 compute-0 ceph-mon[73551]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:21 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:22.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:22.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:23 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:23 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:23 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:23 compute-0 ceph-mon[73551]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:08:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:24.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:08:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:25 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:25 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40038a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:25 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:25 compute-0 ceph-mon[73551]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:08:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954143266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:08:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:08:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954143266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:08:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:26.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:26.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/954143266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:08:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/954143266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:08:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:27.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:08:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:08:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:08:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:27 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:27 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:27 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40038c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:27 compute-0 ceph-mon[73551]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:28 compute-0 podman[264801]: 2025-10-10 10:08:28.207772177 +0000 UTC m=+0.056030624 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 10 10:08:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:28.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:28.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:29 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:29 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:29 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:30 compute-0 ceph-mon[73551]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:30.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:30.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:08:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:31 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e40038e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:31 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:31 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:32 compute-0 ceph-mon[73551]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:32.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:33 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:33 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:33 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:34 compute-0 ceph-mon[73551]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:34.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000067s ======
Oct 10 10:08:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:34.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Oct 10 10:08:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=cleanup t=2025-10-10T10:08:34.717856274Z level=info msg="Completed cleanup jobs" duration=6.93262ms
Oct 10 10:08:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugins.update.checker t=2025-10-10T10:08:34.848006293Z level=info msg="Update check succeeded" duration=47.074296ms
Oct 10 10:08:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana.update.checker t=2025-10-10T10:08:34.906265571Z level=info msg="Update check succeeded" duration=70.159684ms
Oct 10 10:08:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:35 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:35 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:35 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:36 compute-0 ceph-mon[73551]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:36 compute-0 sudo[264829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:08:36 compute-0 sudo[264829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:36 compute-0 sudo[264829]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:36.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:37.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:08:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:37] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:08:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:37] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 10 10:08:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:37 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:37 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:37 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:38 compute-0 ceph-mon[73551]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:38.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:39 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:40 compute-0 ceph-mon[73551]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:40.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:40.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:41 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:41 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:08:41.895 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:08:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:08:41.895 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:08:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:08:41.896 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:08:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:41 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89e4003960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:42 compute-0 ceph-mon[73551]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:42.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:43 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:43 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:43 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:44 compute-0 ceph-mon[73551]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:44.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:44.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:45 compute-0 podman[264865]: 2025-10-10 10:08:45.240475788 +0000 UTC m=+0.075674668 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:08:45 compute-0 podman[264867]: 2025-10-10 10:08:45.25496892 +0000 UTC m=+0.089234389 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:08:45 compute-0 podman[264866]: 2025-10-10 10:08:45.255674173 +0000 UTC m=+0.083826839 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:08:45 compute-0 ceph-mon[73551]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d8000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc003a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:45 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:08:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:08:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:46.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:46.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:47.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:08:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:47] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 10:08:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:47] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 10:08:47 compute-0 ceph-mon[73551]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:47 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:47 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d8001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:47 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d8001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3922966226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1968454522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2549355807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.669 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.669 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.669 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.669 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:08:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:48.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.693 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.694 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.694 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.694 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.694 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.694 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.695 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.695 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.695 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:48.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.719 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.720 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.720 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.720 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:08:48 compute-0 nova_compute[261329]: 2025-10-10 10:08:48.720 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:08:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:08:49 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040447484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.166 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.349 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.351 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.351 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.351 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.418 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.419 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:08:49 compute-0 ceph-mon[73551]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4155216318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2040447484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.446 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:08:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:49 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:49 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89fc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:08:49 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585741613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.910 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.916 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.935 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.938 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:08:49 compute-0 nova_compute[261329]: 2025-10-10 10:08:49.938 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:08:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:49 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d8001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:50 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1585741613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:50.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:50.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:51 compute-0 ceph-mon[73551]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[263757]: 10/10/2025 10:08:51 : epoch 68e8daeb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d8001920 fd 39 proxy ignored for local
Oct 10 10:08:51 compute-0 kernel: ganesha.nfsd[264794]: segfault at 50 ip 00007f8ab3a5832e sp 00007f8a7cff8210 error 4 in libntirpc.so.5.8[7f8ab3a3d000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 10 10:08:51 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:08:51 compute-0 systemd[1]: Started Process Core Dump (PID 264976/UID 0).
Oct 10 10:08:52 compute-0 systemd-coredump[264977]: Process 263761 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f8ab3a5832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:08:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:52.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:52 compute-0 systemd[1]: systemd-coredump@9-264976-0.service: Deactivated successfully.
Oct 10 10:08:52 compute-0 systemd[1]: systemd-coredump@9-264976-0.service: Consumed 1.079s CPU time.
Oct 10 10:08:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:52.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:52 compute-0 podman[264983]: 2025-10-10 10:08:52.747285594 +0000 UTC m=+0.027905929 container died f71b17c5038e60c63b8880e7a2ed1cdbbef9502a2f017ad9eb312fbf20f106fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:08:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c38c991bbdd92fdd227423338f523a43940844dd05dc41d41dd6d8ded1e10c5-merged.mount: Deactivated successfully.
Oct 10 10:08:52 compute-0 podman[264983]: 2025-10-10 10:08:52.788311178 +0000 UTC m=+0.068931493 container remove f71b17c5038e60c63b8880e7a2ed1cdbbef9502a2f017ad9eb312fbf20f106fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:08:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:08:52 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:08:52 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:08:52 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.512s CPU time.
Oct 10 10:08:53 compute-0 ceph-mon[73551]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:08:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:54.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:54.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:55 compute-0 ceph-mon[73551]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:56 compute-0 sudo[265031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:08:56 compute-0 sudo[265031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:56 compute-0 sudo[265031]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:56.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:56.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:57.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:08:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:08:57.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:08:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:57] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 10:08:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:08:57] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 10 10:08:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100857 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:08:57 compute-0 ceph-mon[73551]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:57 compute-0 sudo[265057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:08:57 compute-0 sudo[265057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:57 compute-0 sudo[265057]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:58 compute-0 sudo[265082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:08:58 compute-0 sudo[265082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:58 compute-0 sudo[265082]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 10 10:08:58 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:08:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:08:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:58.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:08:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:08:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:08:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:58.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:08:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:08:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:59 compute-0 podman[265142]: 2025-10-10 10:08:59.21796314 +0000 UTC m=+0.058453105 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 10 10:08:59 compute-0 ceph-mon[73551]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:09:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:09:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:09:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:00.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:09:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:09:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:09:00 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:09:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:09:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:09:01 compute-0 sudo[265166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:09:01 compute-0 sudo[265166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:01 compute-0 sudo[265166]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:02 compute-0 sudo[265191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:09:02 compute-0 sudo[265191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:09:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.541647378 +0000 UTC m=+0.040877501 container create 3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:09:02 compute-0 systemd[1]: Started libpod-conmon-3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff.scope.
Oct 10 10:09:02 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.525696046 +0000 UTC m=+0.024926199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.623657494 +0000 UTC m=+0.122887647 container init 3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mahavira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.629370604 +0000 UTC m=+0.128600727 container start 3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mahavira, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.632131656 +0000 UTC m=+0.131361779 container attach 3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 10:09:02 compute-0 heuristic_mahavira[265276]: 167 167
Oct 10 10:09:02 compute-0 systemd[1]: libpod-3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff.scope: Deactivated successfully.
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.634996372 +0000 UTC m=+0.134226495 container died 3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd49588b0c767652be499c06fa7eb521f0190f54390061067a60d033e51520f7-merged.mount: Deactivated successfully.
Oct 10 10:09:02 compute-0 podman[265259]: 2025-10-10 10:09:02.671011539 +0000 UTC m=+0.170241662 container remove 3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mahavira, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:09:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:02 compute-0 systemd[1]: libpod-conmon-3a6b603b302fffcfdd85aa2e7875989073a6462185e91b36c9cf56b595fa22ff.scope: Deactivated successfully.
Oct 10 10:09:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:02.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:09:02 compute-0 podman[265302]: 2025-10-10 10:09:02.830440971 +0000 UTC m=+0.045789483 container create c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:09:02 compute-0 systemd[1]: Started libpod-conmon-c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335.scope.
Oct 10 10:09:02 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea1b1d5e7650a1a4a5338c320293c1af243df48638d253cecc469b0c060f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:02 compute-0 podman[265302]: 2025-10-10 10:09:02.809710142 +0000 UTC m=+0.025058694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea1b1d5e7650a1a4a5338c320293c1af243df48638d253cecc469b0c060f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea1b1d5e7650a1a4a5338c320293c1af243df48638d253cecc469b0c060f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea1b1d5e7650a1a4a5338c320293c1af243df48638d253cecc469b0c060f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ea1b1d5e7650a1a4a5338c320293c1af243df48638d253cecc469b0c060f5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:02 compute-0 podman[265302]: 2025-10-10 10:09:02.92449608 +0000 UTC m=+0.139844682 container init c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_moser, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:09:02 compute-0 podman[265302]: 2025-10-10 10:09:02.93052306 +0000 UTC m=+0.145871582 container start c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_moser, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:09:02 compute-0 podman[265302]: 2025-10-10 10:09:02.933805649 +0000 UTC m=+0.149154251 container attach c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 10:09:03 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 10.
Oct 10 10:09:03 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:09:03 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.512s CPU time.
Oct 10 10:09:03 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
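
A note on the "restart counter is at 10" line above: systemd has already restarted this Ganesha unit ten times, which is the signature of a crash-looping service rather than a clean deploy. A minimal sketch for watching this from a script, assuming systemd >= 235 (which exposes the NRestarts property) and using the unit name taken from the log above:

    import subprocess

    UNIT = "ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service"

    def restart_count(unit: str) -> int:
        # `systemctl show -p NRestarts <unit>` prints e.g. "NRestarts=10"
        out = subprocess.run(
            ["systemctl", "show", "-p", "NRestarts", unit],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return int(out.split("=", 1)[1])

    if restart_count(UNIT) >= 5:
        print(f"{UNIT} appears to be crash-looping")
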
Oct 10 10:09:03 compute-0 unruffled_moser[265319]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:09:03 compute-0 unruffled_moser[265319]: --> All data devices are unavailable
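
The two "-->" lines are ceph-volume output captured from the short-lived helper container: cephadm handed it one LVM data device and zero physical ones, and the LVM device was rejected as unavailable. That is expected here, since /dev/ceph_vg0/ceph_lv0 already carries OSD 0 (see the lvm list JSON further down). A hedged way to inspect availability directly is ceph-volume's inventory report; the sketch below assumes the plain `cephadm` binary is on PATH and that the inventory JSON carries "available" and "rejected_reasons" keys, as current ceph-volume releases do:

    import json, subprocess

    # Mirrors the cephadm-wrapped ceph-volume calls visible in this log.
    raw = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        if not dev.get("available", False):
            print(dev.get("path"), "rejected:", ", ".join(dev.get("rejected_reasons", [])))
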
Oct 10 10:09:03 compute-0 systemd[1]: libpod-c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335.scope: Deactivated successfully.
Oct 10 10:09:03 compute-0 podman[265302]: 2025-10-10 10:09:03.323798749 +0000 UTC m=+0.539147281 container died c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:09:03 compute-0 PackageKit[191551]: daemon quit
Oct 10 10:09:03 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 10:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0ea1b1d5e7650a1a4a5338c320293c1af243df48638d253cecc469b0c060f5e-merged.mount: Deactivated successfully.
Oct 10 10:09:03 compute-0 podman[265386]: 2025-10-10 10:09:03.415671964 +0000 UTC m=+0.089268079 container create 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 10:09:03 compute-0 ceph-mon[73551]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:09:03 compute-0 podman[265302]: 2025-10-10 10:09:03.476458357 +0000 UTC m=+0.691806879 container remove c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_moser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:09:03 compute-0 podman[265386]: 2025-10-10 10:09:03.391725309 +0000 UTC m=+0.065321444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:03 compute-0 systemd[1]: libpod-conmon-c342f9605cfe6cb6498d9e0fdca6ace9f32bcf518af667d9e535bc58707fc335.scope: Deactivated successfully.
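
The create, init, start, attach, died, remove sequence for unruffled_moser above is the normal lifecycle of a one-shot cephadm helper container; only a container that dies and is never removed would be suspicious. A small sketch, with the regex fitted to the podman event lines in this log, that groups these journal lines by container ID so each lifecycle can be read at a glance:

    import re
    from collections import defaultdict

    EVENT = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    def lifecycles(journal_lines):
        seen = defaultdict(list)
        for line in journal_lines:
            m = EVENT.search(line)
            if m:
                verb, cid = m.groups()
                seen[cid].append(verb)
        return seen

    # e.g. lifecycles(open("/var/log/messages"))
    # -> {"c342f9605cfe...": ["create", "init", "start", "attach", "died", "remove"], ...}
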
Oct 10 10:09:03 compute-0 sudo[265191]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26212382626c7e0f78bd16ad4513c5f571e2e90092addaed371a48345f7509f5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26212382626c7e0f78bd16ad4513c5f571e2e90092addaed371a48345f7509f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26212382626c7e0f78bd16ad4513c5f571e2e90092addaed371a48345f7509f5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26212382626c7e0f78bd16ad4513c5f571e2e90092addaed371a48345f7509f5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:03 compute-0 podman[265386]: 2025-10-10 10:09:03.551906226 +0000 UTC m=+0.225502351 container init 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:09:03 compute-0 podman[265386]: 2025-10-10 10:09:03.562207698 +0000 UTC m=+0.235803813 container start 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 10:09:03 compute-0 bash[265386]: 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:09:03 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:09:03 compute-0 sudo[265416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:09:03 compute-0 sudo[265416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:03 compute-0 sudo[265416]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:09:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
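
Ganesha stamps each line with its startup epoch in hex; decoding 68e8db3f as a Unix timestamp gives the daemon's start time, and "IN GRACE, duration 90" means NFS recovery grace could run up to 90 seconds from then (it is in fact lifted early, at 10:09:15 below, because no clients had state to reclaim). A quick check:

    from datetime import datetime, timezone

    epoch = int("68e8db3f", 16)                      # 1760090943
    print(datetime.fromtimestamp(epoch, tz=timezone.utc))
    # -> 2025-10-10 10:09:03+00:00, matching the journal timestamp above
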
Oct 10 10:09:03 compute-0 sudo[265465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:09:03 compute-0 sudo[265465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.07853095 +0000 UTC m=+0.041961506 container create d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_cray, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:09:04 compute-0 systemd[1]: Started libpod-conmon-d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc.scope.
Oct 10 10:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.156363238 +0000 UTC m=+0.119793804 container init d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_cray, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:09:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.064068629 +0000 UTC m=+0.027499205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.163498125 +0000 UTC m=+0.126928681 container start d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.166374651 +0000 UTC m=+0.129805227 container attach d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_cray, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:09:04 compute-0 modest_cray[265565]: 167 167
Oct 10 10:09:04 compute-0 systemd[1]: libpod-d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc.scope: Deactivated successfully.
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.168487651 +0000 UTC m=+0.131918207 container died d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c42436654d091796fdbf70fe4dc3a8b13a828c95179d12126662636b76b092ac-merged.mount: Deactivated successfully.
Oct 10 10:09:04 compute-0 podman[265547]: 2025-10-10 10:09:04.205303296 +0000 UTC m=+0.168733862 container remove d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_cray, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:09:04 compute-0 systemd[1]: libpod-conmon-d899de6899797f7e850271227a91cb6e77cd9b7aac1bc3caf033950366c1e5dc.scope: Deactivated successfully.
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.406916071 +0000 UTC m=+0.049817848 container create de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_rhodes, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:09:04 compute-0 systemd[1]: Started libpod-conmon-de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217.scope.
Oct 10 10:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0658060f61ec4600679a65fa6592535e1391c412a8ee290013bf728e45e9123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0658060f61ec4600679a65fa6592535e1391c412a8ee290013bf728e45e9123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0658060f61ec4600679a65fa6592535e1391c412a8ee290013bf728e45e9123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0658060f61ec4600679a65fa6592535e1391c412a8ee290013bf728e45e9123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.386542774 +0000 UTC m=+0.029444601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.489896491 +0000 UTC m=+0.132798278 container init de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.497900007 +0000 UTC m=+0.140801784 container start de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.502391656 +0000 UTC m=+0.145293483 container attach de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_rhodes, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:09:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:09:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:04.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:09:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
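
The paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.102 and 192.168.122.100, repeating every two seconds throughout this log, have the shape of load-balancer health probes against the RGW beast frontend (haproxy's httpchk behaves exactly like this); the 200 responses simply confirm the gateway is answering. A minimal probe of the same shape, where the host is taken from the log but the frontend port is not visible in this excerpt and 8080 is only a placeholder:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # 200 from a healthy RGW, as in the beast lines above
    conn.close()
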
Oct 10 10:09:04 compute-0 charming_rhodes[265608]: {
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:     "0": [
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:         {
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "devices": [
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "/dev/loop3"
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             ],
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "lv_name": "ceph_lv0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "lv_size": "21470642176",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "name": "ceph_lv0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "tags": {
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.cluster_name": "ceph",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.crush_device_class": "",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.encrypted": "0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.osd_id": "0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.type": "block",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.vdo": "0",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:                 "ceph.with_tpm": "0"
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             },
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "type": "block",
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:             "vg_name": "ceph_vg0"
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:         }
Oct 10 10:09:04 compute-0 charming_rhodes[265608]:     ]
Oct 10 10:09:04 compute-0 charming_rhodes[265608]: }
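
The JSON block above is `ceph-volume lvm list --format json` output relayed by the charming_rhodes helper: a single logical volume, ceph_vg0/ceph_lv0 on /dev/loop3, tagged as the block device of OSD 0. Its lv_size of 21470642176 bytes is roughly 20 GiB, which would be consistent with the cluster-wide "60 GiB / 60 GiB avail" pgmap lines if each of three hosts contributes one OSD of this size (an inference, not something this excerpt proves). A sketch that pulls the interesting fields out of that report:

    import json, sys

    report = json.loads(sys.stdin.read())   # pipe in the JSON block logged above
    for osd_id, lvs in report.items():
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"({size_gib:.1f} GiB, fsid {lv['tags']['ceph.osd_fsid']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20.0 GiB, fsid c307f4a4-...)

Usage: save the JSON to a file (or run the same cephadm ceph-volume command as in the sudo line above) and pipe it to this script.
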
Oct 10 10:09:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:04 compute-0 systemd[1]: libpod-de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217.scope: Deactivated successfully.
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.818602212 +0000 UTC m=+0.461503989 container died de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0658060f61ec4600679a65fa6592535e1391c412a8ee290013bf728e45e9123-merged.mount: Deactivated successfully.
Oct 10 10:09:04 compute-0 podman[265591]: 2025-10-10 10:09:04.861667835 +0000 UTC m=+0.504569612 container remove de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_rhodes, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:09:04 compute-0 systemd[1]: libpod-conmon-de821f6b810f4a9eda51a6a9eddb73e4735ee244d6d02db397a8aa11537e2217.scope: Deactivated successfully.
Oct 10 10:09:04 compute-0 sudo[265465]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:04 compute-0 sudo[265630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:09:04 compute-0 sudo[265630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:04 compute-0 sudo[265630]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:05 compute-0 sudo[265655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:09:05 compute-0 sudo[265655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.418150522 +0000 UTC m=+0.041315195 container create d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chebyshev, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:09:05 compute-0 systemd[1]: Started libpod-conmon-d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8.scope.
Oct 10 10:09:05 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.477078821 +0000 UTC m=+0.100243514 container init d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chebyshev, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.489655069 +0000 UTC m=+0.112819742 container start d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:09:05 compute-0 compassionate_chebyshev[265738]: 167 167
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.493300391 +0000 UTC m=+0.116465094 container attach d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chebyshev, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:09:05 compute-0 systemd[1]: libpod-d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8.scope: Deactivated successfully.
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.39916388 +0000 UTC m=+0.022328573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.494937786 +0000 UTC m=+0.118102469 container died d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chebyshev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-26008e1fc50ab72f0f54ad487f243a70ece2c2f99b477354424956c465d7074e-merged.mount: Deactivated successfully.
Oct 10 10:09:05 compute-0 podman[265722]: 2025-10-10 10:09:05.528693679 +0000 UTC m=+0.151858352 container remove d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:09:05 compute-0 systemd[1]: libpod-conmon-d7cad268130b64f478282852f14755dbaf290cca0b4b477cf679e5e193cbe4e8.scope: Deactivated successfully.
Oct 10 10:09:05 compute-0 podman[265761]: 2025-10-10 10:09:05.68449528 +0000 UTC m=+0.043800118 container create cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leakey, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:09:05 compute-0 systemd[1]: Started libpod-conmon-cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794.scope.
Oct 10 10:09:05 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557e774f44c253da91eec067e6a0c6e59c9edf83541d843a2469008110055a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557e774f44c253da91eec067e6a0c6e59c9edf83541d843a2469008110055a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557e774f44c253da91eec067e6a0c6e59c9edf83541d843a2469008110055a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557e774f44c253da91eec067e6a0c6e59c9edf83541d843a2469008110055a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:05 compute-0 podman[265761]: 2025-10-10 10:09:05.749804822 +0000 UTC m=+0.109109680 container init cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:09:05 compute-0 podman[265761]: 2025-10-10 10:09:05.757637722 +0000 UTC m=+0.116942560 container start cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 10:09:05 compute-0 podman[265761]: 2025-10-10 10:09:05.665423796 +0000 UTC m=+0.024728644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:05 compute-0 podman[265761]: 2025-10-10 10:09:05.760928992 +0000 UTC m=+0.120233910 container attach cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 10 10:09:05 compute-0 ceph-mon[73551]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:06 compute-0 lvm[265852]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:09:06 compute-0 lvm[265852]: VG ceph_vg0 finished
Oct 10 10:09:06 compute-0 awesome_leakey[265777]: {}
Oct 10 10:09:06 compute-0 systemd[1]: libpod-cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794.scope: Deactivated successfully.
Oct 10 10:09:06 compute-0 systemd[1]: libpod-cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794.scope: Consumed 1.233s CPU time.
Oct 10 10:09:06 compute-0 podman[265761]: 2025-10-10 10:09:06.482703196 +0000 UTC m=+0.842008034 container died cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leakey, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8557e774f44c253da91eec067e6a0c6e59c9edf83541d843a2469008110055a4-merged.mount: Deactivated successfully.
Oct 10 10:09:06 compute-0 podman[265761]: 2025-10-10 10:09:06.547664197 +0000 UTC m=+0.906969045 container remove cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_leakey, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:09:06 compute-0 systemd[1]: libpod-conmon-cfe9af64f346281d7f8c193f71e5e42467cce2714f1fd5d2e38b01fa0cb07794.scope: Deactivated successfully.
Oct 10 10:09:06 compute-0 sudo[265655]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:09:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:09:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:06.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:06 compute-0 sudo[265868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:09:06 compute-0 sudo[265868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:06 compute-0 sudo[265868]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:07.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
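
The alertmanager error above is a separate problem from the NFS restarts: the ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443 are not answering before the notify deadline, so dashboard alert delivery fails even while the cluster reports healthy. A hedged reachability check for the same endpoint, with the URL copied from the log as-is (plain HTTP, as alertmanager logged it, although dashboards on 8443 are often TLS):

    import urllib.request, urllib.error

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST")  # alertmanager POSTs JSON here
    try:
        resp = urllib.request.urlopen(req, timeout=5)
        print("answered:", resp.status)
    except urllib.error.HTTPError as exc:
        print("answered with HTTP error:", exc.code)   # socket reachable, app-level problem
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)              # matches the deadline-exceeded symptom
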
Oct 10 10:09:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:07] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:09:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:07] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 10 10:09:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:07 compute-0 ceph-mon[73551]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:08.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:09:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:09:09 compute-0 ceph-mon[73551]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:10 compute-0 unix_chkpwd[265899]: password check failed for user (root)
Oct 10 10:09:10 compute-0 sshd-session[265896]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:09:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:09:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:10.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:09:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:10.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:11 compute-0 ceph-mon[73551]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:12 compute-0 sshd-session[265896]: Failed password for root from 80.94.93.119 port 42744 ssh2
Oct 10 10:09:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:09:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:12.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:09:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:12.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:13 compute-0 unix_chkpwd[265903]: password check failed for user (root)
Oct 10 10:09:13 compute-0 ceph-mon[73551]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:09:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:14.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:09:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:14.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:15 compute-0 sshd-session[265896]: Failed password for root from 80.94.93.119 port 42744 ssh2
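
Interleaved with the Ceph traffic, 80.94.93.119 has been repeatedly failing root password authentication over SSH (the unix_chkpwd and pam_unix lines above), which looks like routine internet brute-forcing; disabling password auth for root, or running fail2ban, is the usual response. A fail2ban-style counting sketch, with the regex fitted to the sshd lines in this log:

    import re
    from collections import Counter

    FAIL = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+) port")

    def noisy_sources(lines, threshold=3):
        hits = Counter(m.group(1) for line in lines if (m := FAIL.search(line)))
        return {ip: n for ip, n in hits.items() if n >= threshold}

    # e.g. noisy_sources(open("/var/log/secure")) -> {"80.94.93.119": 3}
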
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:09:15 compute-0 ceph-mon[73551]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:16 compute-0 podman[265924]: 2025-10-10 10:09:16.225826636 +0000 UTC m=+0.054329439 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 10 10:09:16 compute-0 podman[265923]: 2025-10-10 10:09:16.234455132 +0000 UTC m=+0.065836220 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:09:16 compute-0 podman[265925]: 2025-10-10 10:09:16.258290975 +0000 UTC m=+0.083740006 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:09:16 compute-0 unix_chkpwd[265987]: password check failed for user (root)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:09:16
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', '.nfs', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log']
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:09:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:09:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:09:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:16.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:16 compute-0 sudo[265988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:09:16 compute-0 sudo[265988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:16 compute-0 sudo[265988]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:17.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:09:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:09:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:17] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:09:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:17 compute-0 ceph-mon[73551]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:18 compute-0 sshd-session[265896]: Failed password for root from 80.94.93.119 port 42744 ssh2
Oct 10 10:09:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:18.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 10 10:09:19 compute-0 sshd-session[265896]: Received disconnect from 80.94.93.119 port 42744:11:  [preauth]
Oct 10 10:09:19 compute-0 sshd-session[265896]: Disconnected from authenticating user root 80.94.93.119 port 42744 [preauth]
Oct 10 10:09:19 compute-0 sshd-session[265896]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:09:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100919 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:09:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:19 compute-0 unix_chkpwd[266018]: password check failed for user (root)
Oct 10 10:09:19 compute-0 sshd-session[266016]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:09:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:20 compute-0 ceph-mon[73551]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 10 10:09:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:20.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:21 compute-0 sshd-session[266016]: Failed password for root from 80.94.93.119 port 19288 ssh2
Oct 10 10:09:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790002660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:22 compute-0 ceph-mon[73551]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:22 compute-0 unix_chkpwd[266022]: password check failed for user (root)
Oct 10 10:09:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:23 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:23 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:23 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:24 compute-0 ceph-mon[73551]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:24 compute-0 sshd-session[266016]: Failed password for root from 80.94.93.119 port 19288 ssh2
Oct 10 10:09:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:25 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790002660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:25 compute-0 unix_chkpwd[266026]: password check failed for user (root)
Oct 10 10:09:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:25 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:25 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:26 compute-0 ceph-mon[73551]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:09:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3145259585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:09:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:09:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3145259585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:09:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100926 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:09:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:26.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3145259585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:09:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3145259585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:09:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:27.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:09:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:27.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:09:27 compute-0 sshd-session[266016]: Failed password for root from 80.94.93.119 port 19288 ssh2
Oct 10 10:09:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:09:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:27] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 10 10:09:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:27 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:27 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790002660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:27 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790002660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:28 compute-0 ceph-mon[73551]: pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:28 compute-0 sshd-session[266016]: Received disconnect from 80.94.93.119 port 19288:11:  [preauth]
Oct 10 10:09:28 compute-0 sshd-session[266016]: Disconnected from authenticating user root 80.94.93.119 port 19288 [preauth]
Oct 10 10:09:28 compute-0 sshd-session[266016]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:09:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:28.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:29 compute-0 unix_chkpwd[266033]: password check failed for user (root)
Oct 10 10:09:29 compute-0 sshd-session[266030]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:09:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:29 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:29 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:29 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:30 compute-0 ceph-mon[73551]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:30 compute-0 podman[266035]: 2025-10-10 10:09:30.253728255 +0000 UTC m=+0.084316108 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:09:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:30.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:30.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:09:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:31 compute-0 sshd-session[266030]: Failed password for root from 80.94.93.119 port 26934 ssh2
Oct 10 10:09:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:31 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:31 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:31 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:32 compute-0 unix_chkpwd[266055]: password check failed for user (root)
Oct 10 10:09:32 compute-0 ceph-mon[73551]: pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:32.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:32 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:32.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:33 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:33 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:33 compute-0 sshd-session[266030]: Failed password for root from 80.94.93.119 port 26934 ssh2
Oct 10 10:09:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:33 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:34 compute-0 ceph-mon[73551]: pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:34.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:34 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:34.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:34 compute-0 unix_chkpwd[266060]: password check failed for user (root)
Oct 10 10:09:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:35 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:09:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:35 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:35 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:35 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:36 compute-0 ceph-mon[73551]: pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:36 compute-0 sshd-session[266030]: Failed password for root from 80.94.93.119 port 26934 ssh2
Oct 10 10:09:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:09:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:09:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:36 compute-0 sudo[266063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:09:36 compute-0 sudo[266063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:36 compute-0 sudo[266063]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:37.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:09:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:37] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 10 10:09:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:37] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 10 10:09:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:37 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:37 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:37 compute-0 sshd-session[266030]: Received disconnect from 80.94.93.119 port 26934:11:  [preauth]
Oct 10 10:09:37 compute-0 sshd-session[266030]: Disconnected from authenticating user root 80.94.93.119 port 26934 [preauth]
Oct 10 10:09:37 compute-0 sshd-session[266030]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:09:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:37 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:38 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:09:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:38 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:09:38 compute-0 ceph-mon[73551]: pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:38.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:38 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:38.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:39 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:39 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:39 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:40 compute-0 ceph-mon[73551]: pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:40.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:40.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:41 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:09:41 compute-0 ceph-mon[73551]: pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:41 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:41 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:09:41.895 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:09:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:09:41.896 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:09:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:09:41.896 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:09:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:41 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:42.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:42.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:09:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:43 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:43 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:43 compute-0 ceph-mon[73551]: pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:09:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:43 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.184140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984184209, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1181, "num_deletes": 250, "total_data_size": 2133696, "memory_usage": 2169544, "flush_reason": "Manual Compaction"}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984197628, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1331552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22084, "largest_seqno": 23264, "table_properties": {"data_size": 1327081, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11513, "raw_average_key_size": 20, "raw_value_size": 1317447, "raw_average_value_size": 2340, "num_data_blocks": 86, "num_entries": 563, "num_filter_entries": 563, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090877, "oldest_key_time": 1760090877, "file_creation_time": 1760090984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 13547 microseconds, and 7856 cpu microseconds.
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.197695) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1331552 bytes OK
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.197728) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.202034) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.202058) EVENT_LOG_v1 {"time_micros": 1760090984202050, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.202084) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2128455, prev total WAL file size 2128455, number of live WAL files 2.
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.203467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1300KB)], [47(14MB)]
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984203523, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16052820, "oldest_snapshot_seqno": -1}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5510 keys, 12707492 bytes, temperature: kUnknown
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984284043, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12707492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12671733, "index_size": 20865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 138842, "raw_average_key_size": 25, "raw_value_size": 12573178, "raw_average_value_size": 2281, "num_data_blocks": 855, "num_entries": 5510, "num_filter_entries": 5510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760090984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.284392) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12707492 bytes
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.286912) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.2 rd, 157.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(21.6) write-amplify(9.5) OK, records in: 5981, records dropped: 471 output_compression: NoCompression
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.286966) EVENT_LOG_v1 {"time_micros": 1760090984286948, "job": 24, "event": "compaction_finished", "compaction_time_micros": 80602, "compaction_time_cpu_micros": 49155, "output_level": 6, "num_output_files": 1, "total_output_size": 12707492, "num_input_records": 5981, "num_output_records": 5510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984287636, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984291125, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.203382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.291251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.291260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.291262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.291265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:09:44.291267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
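The rocksdb EVENT_LOG_v1 entries above (flush job 23, compaction job 24) are single-line JSON payloads appended after the literal "EVENT_LOG_v1" token. A minimal Python sketch for pulling them back out of a journal export like this one; the file name compute-0-messages.log is a hypothetical stand-in for wherever this log is saved:

import json
import re

# The JSON payload follows the literal token "EVENT_LOG_v1" on the line.
EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

def rocksdb_events(path):
    """Yield decoded EVENT_LOG_v1 dicts from a syslog/journal export."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

for ev in rocksdb_events("compute-0-messages.log"):  # hypothetical path
    if ev.get("event") == "compaction_finished":
        # For job 24 above this prints 5981 - 5510 = 471 records dropped,
        # matching the "records dropped: 471" compaction summary line.
        print(ev["job"], ev["total_output_size"],
              ev["num_input_records"] - ev["num_output_records"])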
Oct 10 10:09:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:44.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:44 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:44.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:45 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:45 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:45 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:46 compute-0 ceph-mon[73551]: pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:09:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=404 latency=0.002000068s ======
Oct 10 10:09:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:46.569 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000068s
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - - [10/Oct/2025:10:09:46.586 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct 10 10:09:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100946 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:46.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:46 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:47.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:47.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:47.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
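Both dashboard webhook receivers named in the alertmanager errors above are timing out (compute-1 and compute-2 on port 8443). A quick reachability probe against the same endpoint, sketched in Python; the URL is taken verbatim from the log, while the abridged webhook body is an assumption kept to minimal fields:

import json
import urllib.request

url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
payload = {"status": "firing", "alerts": []}  # abridged, assumed webhook shape

req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except OSError as exc:  # urllib raises URLError (an OSError) on the dial timeout seen here
    print("receiver unreachable:", exc)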
Oct 10 10:09:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:47 compute-0 podman[266098]: 2025-10-10 10:09:47.243202239 +0000 UTC m=+0.088403513 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:09:47 compute-0 podman[266100]: 2025-10-10 10:09:47.256144641 +0000 UTC m=+0.094963672 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:09:47 compute-0 podman[266099]: 2025-10-10 10:09:47.276418535 +0000 UTC m=+0.108594386 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:47] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:09:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:47] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.502 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.522 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.523 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.523 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.539 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.540 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.540 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.540 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:47 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.565 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.565 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.566 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.566 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:09:47 compute-0 nova_compute[261329]: 2025-10-10 10:09:47.566 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:47 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:47 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a0002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:09:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177719747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.003 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.159 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.160 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.160 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.160 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.216 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.216 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:09:48 compute-0 ceph-mon[73551]: pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/177719747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.233 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:09:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:09:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916280922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.760 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
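The resource tracker audit above shells out to ceph df twice within a second, each run taking roughly half a second. The same figures can be fetched with the identical command the tracker runs; a minimal sketch, assuming the client.openstack keyring and /etc/ceph/ceph.conf are readable and the "stats" field names of current Ceph releases:

import json
import subprocess

# Same command the resource tracker runs in the lines above.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]

# Cluster-wide totals; these correspond to the "60 GiB / 60 GiB avail"
# figures in the pgmap lines above.
print("total GiB:", stats["total_bytes"] / 2**30)
print("avail GiB:", stats["total_avail_bytes"] / 2**30)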
Oct 10 10:09:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:48.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:48 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:48.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.767 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.784 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.786 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:09:48 compute-0 nova_compute[261329]: 2025-10-10 10:09:48.786 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
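The inventory reported at 10:09:48.784 fixes the capacity placement will enforce: for each resource class the effective limit is (total - reserved) * allocation_ratio. Worked against the values above:

# Effective capacity per resource class, as placement computes it.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1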
Oct 10 10:09:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2615600125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2916280922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:49 compute-0 nova_compute[261329]: 2025-10-10 10:09:49.483 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:49 compute-0 nova_compute[261329]: 2025-10-10 10:09:49.484 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:49 compute-0 nova_compute[261329]: 2025-10-10 10:09:49.484 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:49 compute-0 nova_compute[261329]: 2025-10-10 10:09:49.485 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:49 compute-0 nova_compute[261329]: 2025-10-10 10:09:49.485 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:09:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:49 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:49 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:49 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:50 compute-0 nova_compute[261329]: 2025-10-10 10:09:50.235 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:50 compute-0 ceph-mon[73551]: pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:50 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4222935933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:50.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beb265d0 =====
Oct 10 10:09:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beb265d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:50 compute-0 radosgw[95218]: beast: 0x7f96beb265d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:50.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 10 10:09:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2285751282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:51 compute-0 ceph-mon[73551]: pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2966347030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 10 10:09:51 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 10 10:09:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:51 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a0002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:51 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:51 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 10 10:09:52 compute-0 ceph-mon[73551]: osdmap e144: 3 total, 3 up, 3 in
Oct 10 10:09:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 10 10:09:52 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 10 10:09:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:09:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:09:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:52.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.0 MiB/s wr, 10 op/s
Oct 10 10:09:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 10 10:09:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 10 10:09:53 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 10 10:09:53 compute-0 ceph-mon[73551]: osdmap e145: 3 total, 3 up, 3 in
Oct 10 10:09:53 compute-0 ceph-mon[73551]: pgmap v712: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.0 MiB/s wr, 10 op/s
Oct 10 10:09:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:53 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:53 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a0002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:53 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 10 10:09:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 10 10:09:54 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 10 10:09:54 compute-0 ceph-mon[73551]: osdmap e146: 3 total, 3 up, 3 in
Oct 10 10:09:54 compute-0 ceph-mon[73551]: osdmap e147: 3 total, 3 up, 3 in
Oct 10 10:09:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:54.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:54.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 20 op/s
Oct 10 10:09:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 10 10:09:55 compute-0 ceph-mon[73551]: pgmap v715: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 20 op/s
Oct 10 10:09:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 10 10:09:55 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 10 10:09:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:55 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:55 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:55 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a0002010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:56 compute-0 ceph-mon[73551]: osdmap e148: 3 total, 3 up, 3 in
Oct 10 10:09:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/100956 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:09:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:56.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 10 10:09:56 compute-0 sudo[266220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:09:56 compute-0 sudo[266220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:56 compute-0 sudo[266220]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:09:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:09:57.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:09:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:57] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:09:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:09:57] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 10 10:09:57 compute-0 ceph-mon[73551]: pgmap v717: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 10 10:09:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:57 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:57 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:57 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:09:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 50 op/s
Oct 10 10:09:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:59 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a00091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:59 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:59 compute-0 ceph-mon[73551]: pgmap v718: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 50 op/s
Oct 10 10:09:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:09:59 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 10 10:10:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:00.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.4 MiB/s wr, 40 op/s
Oct 10 10:10:00 compute-0 ceph-mon[73551]: overall HEALTH_OK
Oct 10 10:10:01 compute-0 podman[266249]: 2025-10-10 10:10:01.254428436 +0000 UTC m=+0.091295870 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:10:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:10:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:01 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a00091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:01 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:01 compute-0 ceph-mon[73551]: pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.4 MiB/s wr, 40 op/s
Oct 10 10:10:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:01 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:10:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:02.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:10:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:02.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 36 op/s
Oct 10 10:10:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a00091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:03 compute-0 ceph-mon[73551]: pgmap v720: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 36 op/s
Oct 10 10:10:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:04.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:04.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Oct 10 10:10:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:05 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:10:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:05 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:05 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:05 compute-0 ceph-mon[73551]: pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Oct 10 10:10:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:05 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a00091b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:06.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:06.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 MiB/s wr, 27 op/s
Oct 10 10:10:07 compute-0 sudo[266275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:10:07 compute-0 sudo[266275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:07 compute-0 sudo[266275]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:07 compute-0 sudo[266300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 10:10:07 compute-0 sudo[266300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:10:07.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:10:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:07] "GET /metrics HTTP/1.1" 200 48325 "" "Prometheus/2.51.0"
Oct 10 10:10:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:07] "GET /metrics HTTP/1.1" 200 48325 "" "Prometheus/2.51.0"
Oct 10 10:10:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:07 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:07 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:07 compute-0 podman[266397]: 2025-10-10 10:10:07.726629616 +0000 UTC m=+0.057016900 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 10:10:07 compute-0 podman[266397]: 2025-10-10 10:10:07.839642367 +0000 UTC m=+0.170029631 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 10:10:07 compute-0 ceph-mon[73551]: pgmap v722: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 MiB/s wr, 27 op/s
Oct 10 10:10:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:07 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:08 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:10:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:08 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:10:08 compute-0 podman[266516]: 2025-10-10 10:10:08.398984256 +0000 UTC m=+0.073331571 container exec 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:10:08 compute-0 podman[266516]: 2025-10-10 10:10:08.409614171 +0000 UTC m=+0.083961476 container exec_died 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:10:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:08.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:08 compute-0 podman[266608]: 2025-10-10 10:10:08.79290318 +0000 UTC m=+0.067614752 container exec 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:10:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.7 MiB/s wr, 27 op/s
Oct 10 10:10:08 compute-0 podman[266608]: 2025-10-10 10:10:08.833802681 +0000 UTC m=+0.108514243 container exec_died 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:10:09 compute-0 podman[266671]: 2025-10-10 10:10:09.078111913 +0000 UTC m=+0.060960700 container exec 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 10:10:09 compute-0 podman[266671]: 2025-10-10 10:10:09.089663348 +0000 UTC m=+0.072512115 container exec_died 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 10:10:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:09 compute-0 podman[266736]: 2025-10-10 10:10:09.326920516 +0000 UTC m=+0.064543660 container exec 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, vcs-type=git, description=keepalived for Ceph, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 10 10:10:09 compute-0 podman[266736]: 2025-10-10 10:10:09.341230912 +0000 UTC m=+0.078854036 container exec_died 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 10 10:10:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:09 compute-0 podman[266801]: 2025-10-10 10:10:09.580740405 +0000 UTC m=+0.062878714 container exec e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:10:09 compute-0 podman[266801]: 2025-10-10 10:10:09.626780178 +0000 UTC m=+0.108918467 container exec_died e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:10:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:09 compute-0 podman[266874]: 2025-10-10 10:10:09.855142119 +0000 UTC m=+0.062680607 container exec 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 10:10:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:09 compute-0 ceph-mon[73551]: pgmap v723: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.7 MiB/s wr, 27 op/s
Oct 10 10:10:10 compute-0 podman[266874]: 2025-10-10 10:10:10.051720053 +0000 UTC m=+0.259258541 container exec_died 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 10:10:10 compute-0 podman[266987]: 2025-10-10 10:10:10.437287277 +0000 UTC m=+0.049780598 container exec fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:10:10 compute-0 podman[266987]: 2025-10-10 10:10:10.467475533 +0000 UTC m=+0.079968834 container exec_died fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:10:10 compute-0 sudo[266300]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:10:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:10:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:10 compute-0 sudo[267030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:10:10 compute-0 sudo[267030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:10 compute-0 sudo[267030]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:10 compute-0 sudo[267055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:10:10 compute-0 sudo[267055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:10.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:10:11 compute-0 sudo[267055]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:11 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:10:11 compute-0 sudo[267110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:10:11 compute-0 sudo[267110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:11 compute-0 sudo[267110]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:11 compute-0 sudo[267135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:10:11 compute-0 sudo[267135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-0 ceph-mon[73551]: pgmap v724: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:10:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:11 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:11 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:11 compute-0 podman[267202]: 2025-10-10 10:10:11.838352527 +0000 UTC m=+0.060217226 container create bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bouman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 10:10:11 compute-0 systemd[1]: Started libpod-conmon-bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4.scope.
Oct 10 10:10:11 compute-0 podman[267202]: 2025-10-10 10:10:11.817229513 +0000 UTC m=+0.039094212 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:11 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:10:11 compute-0 podman[267202]: 2025-10-10 10:10:11.93249805 +0000 UTC m=+0.154362749 container init bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:10:11 compute-0 podman[267202]: 2025-10-10 10:10:11.944898363 +0000 UTC m=+0.166763042 container start bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:10:11 compute-0 podman[267202]: 2025-10-10 10:10:11.948282356 +0000 UTC m=+0.170147055 container attach bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:10:11 compute-0 systemd[1]: libpod-bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4.scope: Deactivated successfully.
Oct 10 10:10:11 compute-0 focused_bouman[267218]: 167 167
Oct 10 10:10:11 compute-0 conmon[267218]: conmon bbc7e581575c864d5dba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4.scope/container/memory.events
Oct 10 10:10:11 compute-0 podman[267202]: 2025-10-10 10:10:11.951843154 +0000 UTC m=+0.173707863 container died bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:10:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:11 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-539f35d7850b543856c2402d6e0c065f685625d51bafc61fb4700471137cd2b7-merged.mount: Deactivated successfully.
Oct 10 10:10:12 compute-0 podman[267202]: 2025-10-10 10:10:12.001720055 +0000 UTC m=+0.223584744 container remove bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bouman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:10:12 compute-0 systemd[1]: libpod-conmon-bbc7e581575c864d5dbaff77ca686cebba665f08d4c364620d348c515dc4c6e4.scope: Deactivated successfully.
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.170622597 +0000 UTC m=+0.042930900 container create a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:10:12 compute-0 systemd[1]: Started libpod-conmon-a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72.scope.
Oct 10 10:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac83b117ab1e6e04d9f9c43728697f389ad225a0110142ecb48fde31b47e4c82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac83b117ab1e6e04d9f9c43728697f389ad225a0110142ecb48fde31b47e4c82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac83b117ab1e6e04d9f9c43728697f389ad225a0110142ecb48fde31b47e4c82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac83b117ab1e6e04d9f9c43728697f389ad225a0110142ecb48fde31b47e4c82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac83b117ab1e6e04d9f9c43728697f389ad225a0110142ecb48fde31b47e4c82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.151747699 +0000 UTC m=+0.024056052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.250724004 +0000 UTC m=+0.123032307 container init a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lamarr, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.257955384 +0000 UTC m=+0.130263687 container start a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.261499492 +0000 UTC m=+0.133807845 container attach a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:10:12 compute-0 happy_lamarr[267260]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:10:12 compute-0 happy_lamarr[267260]: --> All data devices are unavailable
Oct 10 10:10:12 compute-0 systemd[1]: libpod-a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72.scope: Deactivated successfully.
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.629965228 +0000 UTC m=+0.502273531 container died a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac83b117ab1e6e04d9f9c43728697f389ad225a0110142ecb48fde31b47e4c82-merged.mount: Deactivated successfully.
Oct 10 10:10:12 compute-0 podman[267244]: 2025-10-10 10:10:12.682203107 +0000 UTC m=+0.554511410 container remove a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 10:10:12 compute-0 systemd[1]: libpod-conmon-a7e00191a3fbb69f1affd5495d4a5cd21b43e6e40fa3620f586fce6fcfac6d72.scope: Deactivated successfully.
Oct 10 10:10:12 compute-0 sudo[267135]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:12 compute-0 sudo[267287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:10:12 compute-0 sudo[267287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:12 compute-0 sudo[267287]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:12.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:10:12 compute-0 sudo[267313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:10:12 compute-0 sudo[267313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.247432052 +0000 UTC m=+0.049799129 container create 0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 10:10:13 compute-0 systemd[1]: Started libpod-conmon-0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463.scope.
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.223217426 +0000 UTC m=+0.025584493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:13 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.35462081 +0000 UTC m=+0.156987917 container init 0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.361452518 +0000 UTC m=+0.163819615 container start 0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chatelet, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.365591595 +0000 UTC m=+0.167958692 container attach 0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chatelet, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 10:10:13 compute-0 gifted_chatelet[267395]: 167 167
Oct 10 10:10:13 compute-0 systemd[1]: libpod-0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463.scope: Deactivated successfully.
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.367853451 +0000 UTC m=+0.170220548 container died 0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chatelet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 10:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0da62232e17bc7a96babf27c6380a4a580ae33ec7417ba27a5ba66bd006a7f4-merged.mount: Deactivated successfully.
Oct 10 10:10:13 compute-0 podman[267378]: 2025-10-10 10:10:13.413111957 +0000 UTC m=+0.215479054 container remove 0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:10:13 compute-0 systemd[1]: libpod-conmon-0de6ea40392b3cfcd9f009eca2ffc2fce3cc7fc09cc8a2bdcd4242cec1674463.scope: Deactivated successfully.
Oct 10 10:10:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:13 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:13 compute-0 podman[267420]: 2025-10-10 10:10:13.610720275 +0000 UTC m=+0.050678048 container create 87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:10:13 compute-0 systemd[1]: Started libpod-conmon-87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027.scope.
Oct 10 10:10:13 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:10:13 compute-0 podman[267420]: 2025-10-10 10:10:13.590160371 +0000 UTC m=+0.030118204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2a8f7bb58ce290adc310a4da7a3d881f9e4c4f753a5cdc049422196e2070b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:13 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2a8f7bb58ce290adc310a4da7a3d881f9e4c4f753a5cdc049422196e2070b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2a8f7bb58ce290adc310a4da7a3d881f9e4c4f753a5cdc049422196e2070b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2a8f7bb58ce290adc310a4da7a3d881f9e4c4f753a5cdc049422196e2070b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:13 compute-0 podman[267420]: 2025-10-10 10:10:13.709293717 +0000 UTC m=+0.149251510 container init 87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:10:13 compute-0 podman[267420]: 2025-10-10 10:10:13.71778301 +0000 UTC m=+0.157740803 container start 87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:10:13 compute-0 podman[267420]: 2025-10-10 10:10:13.721653088 +0000 UTC m=+0.161610921 container attach 87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:10:13 compute-0 ceph-mon[73551]: pgmap v725: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:10:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:13 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]: {
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:     "0": [
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:         {
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "devices": [
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "/dev/loop3"
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             ],
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "lv_name": "ceph_lv0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "lv_size": "21470642176",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "name": "ceph_lv0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "tags": {
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.cluster_name": "ceph",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.crush_device_class": "",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.encrypted": "0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.osd_id": "0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.type": "block",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.vdo": "0",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:                 "ceph.with_tpm": "0"
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             },
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "type": "block",
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:             "vg_name": "ceph_vg0"
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:         }
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]:     ]
Oct 10 10:10:14 compute-0 trusting_keldysh[267436]: }
Oct 10 10:10:14 compute-0 systemd[1]: libpod-87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027.scope: Deactivated successfully.
Oct 10 10:10:14 compute-0 podman[267420]: 2025-10-10 10:10:14.084164886 +0000 UTC m=+0.524122649 container died 87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 10:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d2a8f7bb58ce290adc310a4da7a3d881f9e4c4f753a5cdc049422196e2070b3-merged.mount: Deactivated successfully.
Oct 10 10:10:14 compute-0 podman[267420]: 2025-10-10 10:10:14.13028108 +0000 UTC m=+0.570238893 container remove 87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 10:10:14 compute-0 systemd[1]: libpod-conmon-87715750c4f52952a4682e3b5f088939586862bcbda2180bdbdbc9a682515027.scope: Deactivated successfully.
Oct 10 10:10:14 compute-0 sudo[267313]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:14 compute-0 sudo[267459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:10:14 compute-0 sudo[267459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:14 compute-0 sudo[267459]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:14 compute-0 sudo[267484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:10:14 compute-0 sudo[267484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:14 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:10:14.636 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:10:14 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:10:14.639 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.754387165 +0000 UTC m=+0.041643807 container create a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_swirles, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:10:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:14.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:14 compute-0 systemd[1]: Started libpod-conmon-a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6.scope.
Oct 10 10:10:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:14.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.831672189 +0000 UTC m=+0.118928881 container init a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.738785776 +0000 UTC m=+0.026042438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.838893169 +0000 UTC m=+0.126149821 container start a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.842363254 +0000 UTC m=+0.129620046 container attach a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:10:14 compute-0 beautiful_swirles[267566]: 167 167
Oct 10 10:10:14 compute-0 systemd[1]: libpod-a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6.scope: Deactivated successfully.
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.846585874 +0000 UTC m=+0.133842536 container died a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_swirles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d15477f9a3d1f0096fa56722a4da0fbef48928bddd471276dd3f67834e6c7017-merged.mount: Deactivated successfully.
Oct 10 10:10:14 compute-0 podman[267549]: 2025-10-10 10:10:14.884805117 +0000 UTC m=+0.172061759 container remove a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 10:10:14 compute-0 systemd[1]: libpod-conmon-a7397b9fc4adb68e996507e3050d5facb6741bd905f6f1f10e8ffc89691368f6.scope: Deactivated successfully.
Oct 10 10:10:15 compute-0 podman[267591]: 2025-10-10 10:10:15.07085236 +0000 UTC m=+0.042898428 container create 5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:10:15 compute-0 systemd[1]: Started libpod-conmon-5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce.scope.
Oct 10 10:10:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:10:15 compute-0 podman[267591]: 2025-10-10 10:10:15.051461434 +0000 UTC m=+0.023507532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518787a9efa33a44669b6a6a32eeb8f95f317c4b602ec41ef3ddf0b5a0f23e87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518787a9efa33a44669b6a6a32eeb8f95f317c4b602ec41ef3ddf0b5a0f23e87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518787a9efa33a44669b6a6a32eeb8f95f317c4b602ec41ef3ddf0b5a0f23e87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518787a9efa33a44669b6a6a32eeb8f95f317c4b602ec41ef3ddf0b5a0f23e87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:15 compute-0 podman[267591]: 2025-10-10 10:10:15.162695587 +0000 UTC m=+0.134741735 container init 5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_perlman, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:10:15 compute-0 podman[267591]: 2025-10-10 10:10:15.177726328 +0000 UTC m=+0.149772426 container start 5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:10:15 compute-0 podman[267591]: 2025-10-10 10:10:15.181954849 +0000 UTC m=+0.154000977 container attach 5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:10:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:15 compute-0 lvm[267682]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:10:15 compute-0 lvm[267682]: VG ceph_vg0 finished
Oct 10 10:10:15 compute-0 competent_perlman[267608]: {}
Oct 10 10:10:15 compute-0 ceph-mon[73551]: pgmap v726: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:10:15 compute-0 systemd[1]: libpod-5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce.scope: Deactivated successfully.
Oct 10 10:10:15 compute-0 systemd[1]: libpod-5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce.scope: Consumed 1.165s CPU time.
Oct 10 10:10:15 compute-0 podman[267686]: 2025-10-10 10:10:15.959980178 +0000 UTC m=+0.026608837 container died 5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 10:10:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-518787a9efa33a44669b6a6a32eeb8f95f317c4b602ec41ef3ddf0b5a0f23e87-merged.mount: Deactivated successfully.
Oct 10 10:10:16 compute-0 podman[267686]: 2025-10-10 10:10:16.005372688 +0000 UTC m=+0.072001327 container remove 5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:10:16 compute-0 systemd[1]: libpod-conmon-5df52d55fb14feaaf45fa842e8cec7d4cad5b87f729a965fa75bb6786303ffce.scope: Deactivated successfully.
Oct 10 10:10:16 compute-0 sudo[267484]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:10:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:10:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:16 compute-0 sudo[267703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:10:16 compute-0 sudo[267703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:16 compute-0 sudo[267703]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:10:16
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.log', '.nfs', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', '.mgr', 'default.rgw.control']
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:10:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:10:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:10:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101016 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:10:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:16.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:10:17 compute-0 sudo[267729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:10:17 compute-0 sudo[267729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:17 compute-0 sudo[267729]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:10:17.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:10:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:17] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Oct 10 10:10:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:17] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Oct 10 10:10:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790002880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:18 compute-0 ceph-mon[73551]: pgmap v727: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:10:18 compute-0 podman[267756]: 2025-10-10 10:10:18.213215962 +0000 UTC m=+0.059526513 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:10:18 compute-0 podman[267757]: 2025-10-10 10:10:18.213845813 +0000 UTC m=+0.059713519 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 10 10:10:18 compute-0 podman[267758]: 2025-10-10 10:10:18.237105667 +0000 UTC m=+0.079877799 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:10:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:18.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:18.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:10:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.202670) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019202732, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 664, "num_deletes": 257, "total_data_size": 897935, "memory_usage": 911704, "flush_reason": "Manual Compaction"}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019210973, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 870498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23265, "largest_seqno": 23928, "table_properties": {"data_size": 867026, "index_size": 1316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7925, "raw_average_key_size": 18, "raw_value_size": 859782, "raw_average_value_size": 2008, "num_data_blocks": 58, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090985, "oldest_key_time": 1760090985, "file_creation_time": 1760091019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 8334 microseconds, and 5249 cpu microseconds.
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.211010) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 870498 bytes OK
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.211031) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.213470) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.213482) EVENT_LOG_v1 {"time_micros": 1760091019213477, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.213497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 894398, prev total WAL file size 894398, number of live WAL files 2.
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.213942) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(850KB)], [50(12MB)]
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019213980, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13577990, "oldest_snapshot_seqno": -1}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5407 keys, 13422942 bytes, temperature: kUnknown
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019276547, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13422942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13386778, "index_size": 21526, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 137934, "raw_average_key_size": 25, "raw_value_size": 13288918, "raw_average_value_size": 2457, "num_data_blocks": 879, "num_entries": 5407, "num_filter_entries": 5407, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.276977) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13422942 bytes
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283040) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.5 rd, 214.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.1 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(31.0) write-amplify(15.4) OK, records in: 5938, records dropped: 531 output_compression: NoCompression
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283121) EVENT_LOG_v1 {"time_micros": 1760091019283093, "job": 26, "event": "compaction_finished", "compaction_time_micros": 62709, "compaction_time_cpu_micros": 26060, "output_level": 6, "num_output_files": 1, "total_output_size": 13422942, "num_input_records": 5938, "num_output_records": 5407, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019283719, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019289479, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.213881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.289628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.289638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.289641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.289644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:10:19.289647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:20 compute-0 ceph-mon[73551]: pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:10:20 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:10:20.640 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:20.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:20.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:10:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900031a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:22 compute-0 ceph-mon[73551]: pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:10:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:10:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:22.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:10:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:22.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:10:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:23 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:23 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900031a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:23 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:24 compute-0 ceph-mon[73551]: pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:10:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:24.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:24.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:25 compute-0 ceph-mon[73551]: pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:25 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:25 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:25 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2790003340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3994215839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:10:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3994215839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:10:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:26.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:10:27.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:10:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:27] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Oct 10 10:10:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:27] "GET /metrics HTTP/1.1" 200 48322 "" "Prometheus/2.51.0"
Oct 10 10:10:27 compute-0 ceph-mon[73551]: pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:27 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:27 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:27 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:28.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:28.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:29 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:29 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0038a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:29 compute-0 ceph-mon[73551]: pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:29 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:30.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:30.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:10:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:10:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:31 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:31 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:31 compute-0 ceph-mon[73551]: pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:10:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:32 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:32 compute-0 podman[267831]: 2025-10-10 10:10:32.243715309 +0000 UTC m=+0.084550945 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 10 10:10:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:32.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:32.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:10:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:33 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:33 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:33 compute-0 ceph-mon[73551]: pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:10:33 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3328175243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:34 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:34.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:34.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:10:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:35 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:35 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 10 10:10:35 compute-0 ceph-mon[73551]: pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:10:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 10 10:10:35 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 10 10:10:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:36 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:36.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 10 10:10:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 10 10:10:36 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 10 10:10:36 compute-0 ceph-mon[73551]: osdmap e149: 3 total, 3 up, 3 in
Oct 10 10:10:37 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 10 10:10:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:10:37.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:10:37 compute-0 sudo[267857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:10:37 compute-0 sudo[267857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:37 compute-0 sudo[267857]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:37] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Oct 10 10:10:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:37] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Oct 10 10:10:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:37 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:37 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:38 compute-0 ceph-mon[73551]: pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 10 10:10:38 compute-0 ceph-mon[73551]: osdmap e150: 3 total, 3 up, 3 in
Oct 10 10:10:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:38 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:38.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:38.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 10 10:10:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101039 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:10:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 10 10:10:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 10 10:10:39 compute-0 ceph-mon[73551]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 10 10:10:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:39 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:39 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:40 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:40 compute-0 ceph-mon[73551]: pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 10 10:10:40 compute-0 ceph-mon[73551]: osdmap e151: 3 total, 3 up, 3 in
Oct 10 10:10:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:40.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 56 op/s
Oct 10 10:10:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/809446276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:10:41 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1280200693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:10:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:41 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:41 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:10:41.896 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:10:41.896 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:10:41.896 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:42 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:42 compute-0 ceph-mon[73551]: pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 56 op/s
Oct 10 10:10:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:42.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 59 op/s
Oct 10 10:10:43 compute-0 ceph-mon[73551]: pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 59 op/s
Oct 10 10:10:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:43 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:43 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:44 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:44.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Oct 10 10:10:45 compute-0 nova_compute[261329]: 2025-10-10 10:10:45.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:45 compute-0 nova_compute[261329]: 2025-10-10 10:10:45.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:10:45 compute-0 nova_compute[261329]: 2025-10-10 10:10:45.264 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:10:45 compute-0 nova_compute[261329]: 2025-10-10 10:10:45.265 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:45 compute-0 nova_compute[261329]: 2025-10-10 10:10:45.265 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:10:45 compute-0 nova_compute[261329]: 2025-10-10 10:10:45.283 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:45 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:45 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:45 compute-0 ceph-mon[73551]: pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Oct 10 10:10:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:46 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:10:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:10:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:46.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 41 op/s
Oct 10 10:10:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:10:47.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:10:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:47] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Oct 10 10:10:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:47] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Oct 10 10:10:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:47 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:47 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:47 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:10:47 compute-0 ceph-mon[73551]: pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 41 op/s
Oct 10 10:10:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:48 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.304 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.304 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.305 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.318 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.319 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.319 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.319 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.340 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.340 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.341 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.341 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.341 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:10:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226504384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.784 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:48.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 10 10:10:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3226504384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.948 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.949 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4880MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.949 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:48 compute-0 nova_compute[261329]: 2025-10-10 10:10:48.949 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.099 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.100 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.163 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing inventories for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:10:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:49 compute-0 podman[267916]: 2025-10-10 10:10:49.213888464 +0000 UTC m=+0.057357271 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 10 10:10:49 compute-0 podman[267918]: 2025-10-10 10:10:49.245361866 +0000 UTC m=+0.079457099 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 10 10:10:49 compute-0 podman[267917]: 2025-10-10 10:10:49.253725006 +0000 UTC m=+0.089107253 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.296 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating ProviderTree inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.297 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.317 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing aggregate associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.339 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing trait associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_CLMUL,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.366 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:49 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:49 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:10:49 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553159324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.847 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.853 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.869 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.871 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:10:49 compute-0 nova_compute[261329]: 2025-10-10 10:10:49.871 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:49 compute-0 ceph-mon[73551]: pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 10 10:10:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2896383186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2553159324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:50 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:50 compute-0 nova_compute[261329]: 2025-10-10 10:10:50.789 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:50 compute-0 nova_compute[261329]: 2025-10-10 10:10:50.790 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:50 compute-0 nova_compute[261329]: 2025-10-10 10:10:50.791 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:50 compute-0 nova_compute[261329]: 2025-10-10 10:10:50.791 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:50 compute-0 nova_compute[261329]: 2025-10-10 10:10:50.791 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:50 compute-0 nova_compute[261329]: 2025-10-10 10:10:50.792 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:10:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:50.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:50.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 10 10:10:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:50 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:10:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:50 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:10:50 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3253010054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:51 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:51 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:51 compute-0 ceph-mon[73551]: pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 10 10:10:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:52 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:52.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 82 op/s
Oct 10 10:10:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1333009736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:53 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:53 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c0032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:53 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:10:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:54 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:54 compute-0 ceph-mon[73551]: pgmap v748: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 82 op/s
Oct 10 10:10:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/547334843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 10 10:10:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:55 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:55 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:56 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:56 compute-0 ceph-mon[73551]: pgmap v749: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 10 10:10:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:56.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:10:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:10:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 10 10:10:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:10:57.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:10:57 compute-0 sudo[268014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:10:57 compute-0 sudo[268014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:57 compute-0 sudo[268014]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:57] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Oct 10 10:10:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:10:57] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Oct 10 10:10:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:57 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:57 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:57 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:10:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:58 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:58 compute-0 ceph-mon[73551]: pgmap v750: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 10 10:10:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:10:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:58.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:10:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:10:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 79 op/s
Oct 10 10:10:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101059 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:10:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:59 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:10:59 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:00 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:00 compute-0 ceph-mon[73551]: pgmap v751: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 79 op/s
Oct 10 10:11:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:00.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 584 KiB/s rd, 938 B/s wr, 23 op/s
Oct 10 10:11:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:11:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:01 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:01 compute-0 anacron[216710]: Job `cron.daily' started
Oct 10 10:11:01 compute-0 anacron[216710]: Job `cron.daily' terminated
Oct 10 10:11:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:01 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27a000a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:02 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:02 compute-0 ceph-mon[73551]: pgmap v752: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 584 KiB/s rd, 938 B/s wr, 23 op/s
Oct 10 10:11:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:11:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:11:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 10 10:11:03 compute-0 podman[268047]: 2025-10-10 10:11:03.223308106 +0000 UTC m=+0.065214233 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 10 10:11:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:03 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:04 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:04 compute-0 ceph-mon[73551]: pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 10 10:11:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:05 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:05 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27700010b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:06 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:06 compute-0 ceph-mon[73551]: pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:07.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:11:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:07] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:11:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:07] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:11:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:07 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:07 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:08 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27700010b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:08 compute-0 ceph-mon[73551]: pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:08.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:11:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:09 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:10 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:10 compute-0 ceph-mon[73551]: pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:11:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:10.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:11 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:11 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:12 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:12 compute-0 ceph-mon[73551]: pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:13 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770001db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:13 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:14 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:14 compute-0 ceph-mon[73551]: pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:11:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:15 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2770001db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:16 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:16 compute-0 ceph-mon[73551]: pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:11:16
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'images', '.nfs', 'cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', 'default.rgw.log']
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:11:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:11:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:11:16 compute-0 sudo[268081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:11:16 compute-0 sudo[268081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:16 compute-0 sudo[268081]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:16 compute-0 sudo[268106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:11:16 compute-0 sudo[268106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:11:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:16.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:11:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:11:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:16.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:11:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:17.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:11:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:17 compute-0 sudo[268156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:11:17 compute-0 sudo[268156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:17 compute-0 sudo[268156]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:17] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 10 10:11:17 compute-0 sudo[268106]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:17] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:11:17 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:11:17 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:11:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:17 compute-0 sudo[268190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:11:17 compute-0 sudo[268190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:17 compute-0 sudo[268190]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:17 compute-0 sudo[268215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:11:17 compute-0 sudo[268215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:17 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:17 compute-0 nova_compute[261329]: 2025-10-10 10:11:17.970 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:17 compute-0 nova_compute[261329]: 2025-10-10 10:11:17.970 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:17 compute-0 nova_compute[261329]: 2025-10-10 10:11:17.990 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:11:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:18 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.070 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.071 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.078 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.079 2 INFO nova.compute.claims [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Claim successful on node compute-0.ctlplane.example.com
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.115447312 +0000 UTC m=+0.049397924 container create 4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:11:18 compute-0 systemd[1]: Started libpod-conmon-4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce.scope.
Oct 10 10:11:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.094048086 +0000 UTC m=+0.027998708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.191505858 +0000 UTC m=+0.125456470 container init 4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.199909629 +0000 UTC m=+0.133860241 container start 4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.20351042 +0000 UTC m=+0.137461022 container attach 4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:11:18 compute-0 heuristic_mirzakhani[268297]: 167 167
Oct 10 10:11:18 compute-0 systemd[1]: libpod-4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce.scope: Deactivated successfully.
Oct 10 10:11:18 compute-0 conmon[268297]: conmon 4f7ca5f5ec6474138d22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce.scope/container/memory.events
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.206583002 +0000 UTC m=+0.140533604 container died 4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.215 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:18 compute-0 ceph-mon[73551]: pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:11:18 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d081e8ebaf9a37e8ae0453ac2197285f01017ce0f36dcb3191dfd88826a8698-merged.mount: Deactivated successfully.
Oct 10 10:11:18 compute-0 podman[268280]: 2025-10-10 10:11:18.245073061 +0000 UTC m=+0.179023663 container remove 4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:11:18 compute-0 systemd[1]: libpod-conmon-4f7ca5f5ec6474138d227f307bd7168d17611bcd7c38c0c26135552ea05fb6ce.scope: Deactivated successfully.
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.403274755 +0000 UTC m=+0.039387769 container create 81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:11:18 compute-0 systemd[1]: Started libpod-conmon-81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985.scope.
Oct 10 10:11:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4277175fc7a71323ebab5bb743d7bf4f6c2677aba2923831442dbdc041a24cb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4277175fc7a71323ebab5bb743d7bf4f6c2677aba2923831442dbdc041a24cb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4277175fc7a71323ebab5bb743d7bf4f6c2677aba2923831442dbdc041a24cb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4277175fc7a71323ebab5bb743d7bf4f6c2677aba2923831442dbdc041a24cb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4277175fc7a71323ebab5bb743d7bf4f6c2677aba2923831442dbdc041a24cb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.482529178 +0000 UTC m=+0.118642212 container init 81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.386250005 +0000 UTC m=+0.022363039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.489372456 +0000 UTC m=+0.125485470 container start 81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.492620775 +0000 UTC m=+0.128733789 container attach 81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:11:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:18 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887558386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.661 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.669 2 DEBUG nova.compute.provider_tree [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.698 2 DEBUG nova.scheduler.client.report [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.732 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.734 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.809 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.810 2 DEBUG nova.network.neutron [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.838 2 INFO nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:11:18 compute-0 elastic_taussig[268360]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:11:18 compute-0 elastic_taussig[268360]: --> All data devices are unavailable
Oct 10 10:11:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:18.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.859 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:11:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 10 10:11:18 compute-0 systemd[1]: libpod-81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985.scope: Deactivated successfully.
Oct 10 10:11:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.871501346 +0000 UTC m=+0.507614390 container died 81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 10:11:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:18.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4277175fc7a71323ebab5bb743d7bf4f6c2677aba2923831442dbdc041a24cb3-merged.mount: Deactivated successfully.
Oct 10 10:11:18 compute-0 podman[268344]: 2025-10-10 10:11:18.925892376 +0000 UTC m=+0.562005420 container remove 81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:11:18 compute-0 systemd[1]: libpod-conmon-81c41400a0d0f2a2f7290becf8c8c63de43686f3a57bc5a089c0a55dfd241985.scope: Deactivated successfully.
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.943 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.944 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.945 2 INFO nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Creating image(s)
Oct 10 10:11:18 compute-0 nova_compute[261329]: 2025-10-10 10:11:18.976 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:18 compute-0 sudo[268215]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.006 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.041 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.045 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.046 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:19 compute-0 sudo[268415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:11:19 compute-0 sudo[268415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:19 compute-0 sudo[268415]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:19 compute-0 sudo[268469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:11:19 compute-0 sudo[268469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:19 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3887558386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.253625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079253662, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 251, "total_data_size": 1125880, "memory_usage": 1149456, "flush_reason": "Manual Compaction"}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079261738, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1112996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23929, "largest_seqno": 24740, "table_properties": {"data_size": 1108996, "index_size": 1716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9395, "raw_average_key_size": 19, "raw_value_size": 1100701, "raw_average_value_size": 2307, "num_data_blocks": 77, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091020, "oldest_key_time": 1760091020, "file_creation_time": 1760091079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8137 microseconds, and 3346 cpu microseconds.
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.261767) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1112996 bytes OK
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.261783) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.263501) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.263514) EVENT_LOG_v1 {"time_micros": 1760091079263510, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.263532) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1121901, prev total WAL file size 1121901, number of live WAL files 2.
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.264109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1086KB)], [53(12MB)]
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079264194, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14535938, "oldest_snapshot_seqno": -1}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5366 keys, 12453195 bytes, temperature: kUnknown
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079344894, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12453195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12418152, "index_size": 20533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137817, "raw_average_key_size": 25, "raw_value_size": 12321644, "raw_average_value_size": 2296, "num_data_blocks": 834, "num_entries": 5366, "num_filter_entries": 5366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.345192) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12453195 bytes
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.346463) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.0 rd, 154.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.8 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(24.2) write-amplify(11.2) OK, records in: 5884, records dropped: 518 output_compression: NoCompression
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.346484) EVENT_LOG_v1 {"time_micros": 1760091079346472, "job": 28, "event": "compaction_finished", "compaction_time_micros": 80768, "compaction_time_cpu_micros": 29391, "output_level": 6, "num_output_files": 1, "total_output_size": 12453195, "num_input_records": 5884, "num_output_records": 5366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079346775, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079349115, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.263947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.349211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.349218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.349220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.349222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:11:19.349223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.409 2 DEBUG nova.virt.libvirt.imagebackend [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image locations are: [{'url': 'rbd://21f084a3-af34-5230-afe4-ea5cd24a55f4/images/5ae78700-970d-45b4-a57d-978a054c7519/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://21f084a3-af34-5230-afe4-ea5cd24a55f4/images/5ae78700-970d-45b4-a57d-978a054c7519/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.511757834 +0000 UTC m=+0.043729334 container create 38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:11:19 compute-0 systemd[1]: Started libpod-conmon-38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48.scope.
Oct 10 10:11:19 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.49368917 +0000 UTC m=+0.025660680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.60156074 +0000 UTC m=+0.133532270 container init 38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatelet, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:11:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f276c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.611739651 +0000 UTC m=+0.143711161 container start 38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatelet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.615558578 +0000 UTC m=+0.147530188 container attach 38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:11:19 compute-0 dreamy_chatelet[268555]: 167 167
Oct 10 10:11:19 compute-0 systemd[1]: libpod-38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48.scope: Deactivated successfully.
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.619272913 +0000 UTC m=+0.151244423 container died 38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:11:19 compute-0 podman[268553]: 2025-10-10 10:11:19.626917919 +0000 UTC m=+0.068770153 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 10 10:11:19 compute-0 podman[268550]: 2025-10-10 10:11:19.629289858 +0000 UTC m=+0.071860746 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 10 10:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fee2b10563f83ebd57d9dffbed27a4e5ea0977bb5a9ae2b4020c57ab59e54eee-merged.mount: Deactivated successfully.
Oct 10 10:11:19 compute-0 podman[268536]: 2025-10-10 10:11:19.658136874 +0000 UTC m=+0.190108374 container remove 38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:11:19 compute-0 systemd[1]: libpod-conmon-38daaac65c9cb313cf7c7d83bc5725e5b4fea3a2a703b69412eb568f640e7a48.scope: Deactivated successfully.
Oct 10 10:11:19 compute-0 podman[268554]: 2025-10-10 10:11:19.677434279 +0000 UTC m=+0.119349345 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
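[note] The three health_status=healthy events above (iscsid, multipathd, ovn_controller) are podman's scheduled healthchecks firing; each runs the '/openstack/healthcheck' test declared in the container's config_data. The same probe can be run on demand, e.g.:
    $ podman healthcheck run iscsid; echo "exit=$?"   # exit 0 means healthy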
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.711 2 WARNING oslo_policy.policy [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.712 2 WARNING oslo_policy.policy [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
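[note] The repeated WARNING above is nova reading a JSON-formatted policy file; the message itself names the migration tool. A minimal sketch of the suggested conversion, with both paths hypothetical since the log does not show where the policy file lives:
    $ oslopolicy-convert-json-to-yaml --namespace nova \
          --policy-file /etc/nova/policy.json \
          --output-file /etc/nova/policy.yaml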
Oct 10 10:11:19 compute-0 nova_compute[261329]: 2025-10-10 10:11:19.715 2 DEBUG nova.policy [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:11:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:19 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:19 compute-0 podman[268640]: 2025-10-10 10:11:19.826335573 +0000 UTC m=+0.044726768 container create b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:11:19 compute-0 systemd[1]: Started libpod-conmon-b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a.scope.
Oct 10 10:11:19 compute-0 podman[268640]: 2025-10-10 10:11:19.807055098 +0000 UTC m=+0.025446323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:19 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c16fd7c293750f5b6c00ef14afebe249ab0c43a035798ea8e1d6383594ed4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c16fd7c293750f5b6c00ef14afebe249ab0c43a035798ea8e1d6383594ed4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c16fd7c293750f5b6c00ef14afebe249ab0c43a035798ea8e1d6383594ed4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c16fd7c293750f5b6c00ef14afebe249ab0c43a035798ea8e1d6383594ed4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:19 compute-0 podman[268640]: 2025-10-10 10:11:19.922073247 +0000 UTC m=+0.140464462 container init b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatterjee, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:11:19 compute-0 podman[268640]: 2025-10-10 10:11:19.930377455 +0000 UTC m=+0.148768650 container start b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:11:19 compute-0 podman[268640]: 2025-10-10 10:11:19.934094369 +0000 UTC m=+0.152485584 container attach b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:11:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:20 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]: {
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:     "0": [
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:         {
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "devices": [
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "/dev/loop3"
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             ],
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "lv_name": "ceph_lv0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "lv_size": "21470642176",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "name": "ceph_lv0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "tags": {
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.cluster_name": "ceph",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.crush_device_class": "",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.encrypted": "0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.osd_id": "0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.type": "block",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.vdo": "0",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:                 "ceph.with_tpm": "0"
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             },
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "type": "block",
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:             "vg_name": "ceph_vg0"
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:         }
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]:     ]
Oct 10 10:11:20 compute-0 condescending_chatterjee[268656]: }
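[note] The JSON printed by condescending_chatterjee above is ceph-volume's LVM inventory: OSD 0 (osd_fsid c307f4a4-39e7-4a9c-9d19-a2b8712089ab) backed by LV ceph_vg0/ceph_lv0 on /dev/loop3. A sketch of reproducing it by hand, assuming the same cephadm wrapper that the "raw list" sudo invocation at 10:11:20 below uses:
    $ cephadm ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json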
Oct 10 10:11:20 compute-0 systemd[1]: libpod-b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a.scope: Deactivated successfully.
Oct 10 10:11:20 compute-0 ceph-mon[73551]: pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 10 10:11:20 compute-0 podman[268640]: 2025-10-10 10:11:20.251648378 +0000 UTC m=+0.470039593 container died b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c16fd7c293750f5b6c00ef14afebe249ab0c43a035798ea8e1d6383594ed4b-merged.mount: Deactivated successfully.
Oct 10 10:11:20 compute-0 podman[268640]: 2025-10-10 10:11:20.295786575 +0000 UTC m=+0.514177770 container remove b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.298 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:20 compute-0 systemd[1]: libpod-conmon-b63282c835d8ad532658f4aac5b668192e5b4c880eb028efb056ec18569f4c6a.scope: Deactivated successfully.
Oct 10 10:11:20 compute-0 sudo[268469]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.359 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.361 2 DEBUG nova.virt.images [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] 5ae78700-970d-45b4-a57d-978a054c7519 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.363 2 DEBUG nova.privsep.utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.363 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:20 compute-0 sudo[268680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:11:20 compute-0 sudo[268680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:20 compute-0 sudo[268680]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:20 compute-0 sudo[268716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:11:20 compute-0 sudo[268716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.521 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.527 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.585 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.589 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.617 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.622 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.3 KiB/s wr, 0 op/s
Oct 10 10:11:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:20 compute-0 podman[268820]: 2025-10-10 10:11:20.91322343 +0000 UTC m=+0.041430679 container create e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_curie, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:11:20 compute-0 nova_compute[261329]: 2025-10-10 10:11:20.931 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
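[note] The nova_compute lines above trace the whole fetch_to_raw path for the cached base image: probe the downloaded .part with qemu-img info, convert it from qcow2 to raw, then push the result into the vms pool with rbd import. The same sequence replayed by hand, with placeholder file names (the real paths appear in the log lines):
    $ qemu-img info --force-share --output=json base.part
    $ qemu-img convert -t none -O raw -f qcow2 base.part base.converted
    $ rbd import --pool vms base.converted 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk \
          --image-format=2 --id openstack --conf /etc/ceph/ceph.conf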
Oct 10 10:11:20 compute-0 systemd[1]: Started libpod-conmon-e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f.scope.
Oct 10 10:11:20 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:20 compute-0 podman[268820]: 2025-10-10 10:11:20.89712357 +0000 UTC m=+0.025330839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:20 compute-0 podman[268820]: 2025-10-10 10:11:20.99899946 +0000 UTC m=+0.127206729 container init e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:11:21 compute-0 podman[268820]: 2025-10-10 10:11:21.00556347 +0000 UTC m=+0.133770719 container start e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:11:21 compute-0 podman[268820]: 2025-10-10 10:11:21.009020405 +0000 UTC m=+0.137227674 container attach e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:11:21 compute-0 nifty_curie[268851]: 167 167
Oct 10 10:11:21 compute-0 systemd[1]: libpod-e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f.scope: Deactivated successfully.
Oct 10 10:11:21 compute-0 conmon[268851]: conmon e2d3ff7a2f4851180193 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f.scope/container/memory.events
Oct 10 10:11:21 compute-0 podman[268820]: 2025-10-10 10:11:21.012919036 +0000 UTC m=+0.141126285 container died e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_curie, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.012 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbf3af773ca6a33063a599d4c4b167e3df0a4b6f976545732b81ebc0e062d816-merged.mount: Deactivated successfully.
Oct 10 10:11:21 compute-0 podman[268820]: 2025-10-10 10:11:21.052215341 +0000 UTC m=+0.180422590 container remove e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_curie, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 10:11:21 compute-0 systemd[1]: libpod-conmon-e2d3ff7a2f4851180193378503fb38e856aadbac4feb9a193ff28d1f8c58997f.scope: Deactivated successfully.
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.152 2 DEBUG nova.objects.instance [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.169 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.169 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Ensure instance console log exists: /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.170 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.171 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:21 compute-0 nova_compute[261329]: 2025-10-10 10:11:21.171 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:21 compute-0 podman[268932]: 2025-10-10 10:11:21.260347458 +0000 UTC m=+0.064739298 container create bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:11:21 compute-0 ceph-mon[73551]: pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.3 KiB/s wr, 0 op/s
Oct 10 10:11:21 compute-0 systemd[1]: Started libpod-conmon-bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e.scope.
Oct 10 10:11:21 compute-0 podman[268932]: 2025-10-10 10:11:21.238161265 +0000 UTC m=+0.042553125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:21 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65d3459d50b45e4b2b1b5cfd931815b6bec48499fb939783719ba970073ddd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65d3459d50b45e4b2b1b5cfd931815b6bec48499fb939783719ba970073ddd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65d3459d50b45e4b2b1b5cfd931815b6bec48499fb939783719ba970073ddd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65d3459d50b45e4b2b1b5cfd931815b6bec48499fb939783719ba970073ddd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:21 compute-0 podman[268932]: 2025-10-10 10:11:21.368562139 +0000 UTC m=+0.172954079 container init bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_nightingale, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:11:21 compute-0 podman[268932]: 2025-10-10 10:11:21.377134486 +0000 UTC m=+0.181526326 container start bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 10:11:21 compute-0 podman[268932]: 2025-10-10 10:11:21.382567388 +0000 UTC m=+0.186959438 container attach bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_nightingale, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:11:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27900044f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:21 compute-0 kernel: ganesha.nfsd[268661]: segfault at 50 ip 00007f284d9c932e sp 00007f28057f9210 error 4 in libntirpc.so.5.8[7f284d9ae000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 10 10:11:21 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:11:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[265413]: 10/10/2025 10:11:21 : epoch 68e8db3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 48 proxy ignored for local
Oct 10 10:11:21 compute-0 systemd[1]: Started Process Core Dump (PID 268982/UID 0).
Oct 10 10:11:22 compute-0 lvm[269024]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:11:22 compute-0 lvm[269024]: VG ceph_vg0 finished
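[note] The two lvm[269024] lines are LVM event activation reporting that VG ceph_vg0 became complete once /dev/loop3 came online. A quick manual cross-check of the same objects:
    $ pvs /dev/loop3
    $ vgs ceph_vg0
    $ lvs ceph_vg0/ceph_lv0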
Oct 10 10:11:22 compute-0 nifty_nightingale[268948]: {}
Oct 10 10:11:22 compute-0 systemd[1]: libpod-bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e.scope: Deactivated successfully.
Oct 10 10:11:22 compute-0 systemd[1]: libpod-bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e.scope: Consumed 1.199s CPU time.
Oct 10 10:11:22 compute-0 podman[268932]: 2025-10-10 10:11:22.134918828 +0000 UTC m=+0.939310668 container died bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c65d3459d50b45e4b2b1b5cfd931815b6bec48499fb939783719ba970073ddd-merged.mount: Deactivated successfully.
Oct 10 10:11:22 compute-0 podman[268932]: 2025-10-10 10:11:22.185089697 +0000 UTC m=+0.989481537 container remove bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:11:22 compute-0 systemd[1]: libpod-conmon-bcd50a41d301ade2345675fab2b2398346b966b8157b5e3b0da35f506e3bcf0e.scope: Deactivated successfully.
Oct 10 10:11:22 compute-0 sudo[268716]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:11:22 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:11:22 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:22 compute-0 sudo[269041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:11:22 compute-0 sudo[269041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:22 compute-0 sudo[269041]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:22.751 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:11:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:22.752 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:11:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:22.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:11:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:22.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:11:23 compute-0 systemd-coredump[268988]: Process 265425 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007f284d9c932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
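[note] journald captured only a single usable frame, inside libntirpc.so.5.8, for the ganesha.nfsd crash logged at 10:11:21. The stored core can be examined further with systemd's coredumpctl, assuming debuginfo for nfs-ganesha and libntirpc can be installed:
    $ coredumpctl list ganesha.nfsd
    $ coredumpctl info 265425    # PID from the "dumped core" line above
    $ coredumpctl debug 265425   # opens gdb on the saved core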
Oct 10 10:11:23 compute-0 systemd[1]: systemd-coredump@10-268982-0.service: Deactivated successfully.
Oct 10 10:11:23 compute-0 systemd[1]: systemd-coredump@10-268982-0.service: Consumed 1.326s CPU time.
Oct 10 10:11:23 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:23 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:23 compute-0 podman[269071]: 2025-10-10 10:11:23.279069091 +0000 UTC m=+0.051976441 container died 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-26212382626c7e0f78bd16ad4513c5f571e2e90092addaed371a48345f7509f5-merged.mount: Deactivated successfully.
Oct 10 10:11:23 compute-0 podman[269071]: 2025-10-10 10:11:23.312665776 +0000 UTC m=+0.085573076 container remove 5dc3bb311f3ea6c172f4f3ce8e3f7bfdaa0bf7341210297a4134006713280b3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:11:23 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:11:23 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:11:23 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.872s CPU time.
Oct 10 10:11:23 compute-0 nova_compute[261329]: 2025-10-10 10:11:23.473 2 DEBUG nova.network.neutron [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Successfully created port: 6057c377-c50c-4206-b7f3-690fddb6db9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:11:23 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:23.755 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:24 compute-0 ceph-mon[73551]: pgmap v763: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.728 2 DEBUG nova.network.neutron [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Successfully updated port: 6057c377-c50c-4206-b7f3-690fddb6db9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.749 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.749 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.749 2 DEBUG nova.network.neutron [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:11:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:24.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.874 2 DEBUG nova.compute.manager [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-changed-6057c377-c50c-4206-b7f3-690fddb6db9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.875 2 DEBUG nova.compute.manager [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Refreshing instance network info cache due to event network-changed-6057c377-c50c-4206-b7f3-690fddb6db9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.875 2 DEBUG oslo_concurrency.lockutils [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:11:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:24 compute-0 nova_compute[261329]: 2025-10-10 10:11:24.981 2 DEBUG nova.network.neutron [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:11:25 compute-0 ceph-mon[73551]: pgmap v764: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:11:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46960711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:11:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:11:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46960711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:11:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/46960711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:11:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/46960711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.722 2 DEBUG nova.network.neutron [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Updating instance_info_cache with network_info: [{"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.757 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.757 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Instance network_info: |[{"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.758 2 DEBUG oslo_concurrency.lockutils [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.758 2 DEBUG nova.network.neutron [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Refreshing network info cache for port 6057c377-c50c-4206-b7f3-690fddb6db9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.762 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Start _get_guest_xml network_info=[{"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_options': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.767 2 WARNING nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.771 2 DEBUG nova.virt.libvirt.host [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.772 2 DEBUG nova.virt.libvirt.host [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.774 2 DEBUG nova.virt.libvirt.host [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.774 2 DEBUG nova.virt.libvirt.host [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.775 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.775 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.775 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.775 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.776 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.776 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.776 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.776 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.776 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.777 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.777 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.777 2 DEBUG nova.virt.hardware [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.780 2 DEBUG nova.privsep.utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 10 10:11:26 compute-0 nova_compute[261329]: 2025-10-10 10:11:26.781 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:27.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:11:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:11:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305200522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.237 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.262 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.266 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:27] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 10 10:11:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:27] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 10 10:11:27 compute-0 ceph-mon[73551]: pgmap v765: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3305200522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:11:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101127 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:11:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:11:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006987527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.705 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.706 2 DEBUG nova.virt.libvirt.vif [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:11:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-558029342',display_name='tempest-TestNetworkBasicOps-server-558029342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-558029342',id=2,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG1ya1eiRQKbbkyVKT5z/8MpCZ88zDAb6Vo+rf+NsvCQSRJggsI+FodbcOeRIRwi8xdePd3p3I9XBCPTVOuZwkY10EEaSUvv7qGSmBYABCBnVC7fXPVGxJwgLZnLeevRcw==',key_name='tempest-TestNetworkBasicOps-46626227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-6qqxeoq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:11:18Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
Oct 10 10:11:27 compute-0 nova_compute[261329]: virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.707 2 DEBUG nova.network.os_vif_util [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.708 2 DEBUG nova.network.os_vif_util [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.710 2 DEBUG nova.objects.instance [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.727 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <uuid>26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad</uuid>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <name>instance-00000002</name>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <memory>131072</memory>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <vcpu>1</vcpu>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <metadata>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:name>tempest-TestNetworkBasicOps-server-558029342</nova:name>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:creationTime>2025-10-10 10:11:26</nova:creationTime>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:flavor name="m1.nano">
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:memory>128</nova:memory>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:disk>1</nova:disk>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:swap>0</nova:swap>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </nova:flavor>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:owner>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </nova:owner>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <nova:ports>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <nova:port uuid="6057c377-c50c-4206-b7f3-690fddb6db9b">
Oct 10 10:11:27 compute-0 nova_compute[261329]:           <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         </nova:port>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </nova:ports>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </nova:instance>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </metadata>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <sysinfo type="smbios">
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <system>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <entry name="serial">26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad</entry>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <entry name="uuid">26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad</entry>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </system>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </sysinfo>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <os>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <boot dev="hd"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <smbios mode="sysinfo"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </os>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <features>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <acpi/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <apic/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <vmcoreinfo/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </features>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <clock offset="utc">
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <timer name="hpet" present="no"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </clock>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <cpu mode="host-model" match="exact">
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <disk type="network" device="disk">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk">
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </source>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <target dev="vda" bus="virtio"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <disk type="network" device="cdrom">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk.config">
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </source>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:11:27 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <target dev="sda" bus="sata"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <interface type="ethernet">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <mac address="fa:16:3e:8b:37:b6"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <mtu size="1442"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <target dev="tap6057c377-c5"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <serial type="pty">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <log file="/var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/console.log" append="off"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </serial>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <video>
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </video>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <input type="tablet" bus="usb"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <rng model="virtio">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <controller type="usb" index="0"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     <memballoon model="virtio">
Oct 10 10:11:27 compute-0 nova_compute[261329]:       <stats period="10"/>
Oct 10 10:11:27 compute-0 nova_compute[261329]:     </memballoon>
Oct 10 10:11:27 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:11:27 compute-0 nova_compute[261329]: </domain>
Oct 10 10:11:27 compute-0 nova_compute[261329]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.729 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Preparing to wait for external event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.729 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.729 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.729 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.730 2 DEBUG nova.virt.libvirt.vif [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:11:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-558029342',display_name='tempest-TestNetworkBasicOps-server-558029342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-558029342',id=2,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG1ya1eiRQKbbkyVKT5z/8MpCZ88zDAb6Vo+rf+NsvCQSRJggsI+FodbcOeRIRwi8xdePd3p3I9XBCPTVOuZwkY10EEaSUvv7qGSmBYABCBnVC7fXPVGxJwgLZnLeevRcw==',key_name='tempest-TestNetworkBasicOps-46626227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-6qqxeoq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:11:18Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.730 2 DEBUG nova.network.os_vif_util [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.731 2 DEBUG nova.network.os_vif_util [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.731 2 DEBUG os_vif [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.770 2 DEBUG ovsdbapp.backend.ovs_idl [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.771 2 DEBUG ovsdbapp.backend.ovs_idl [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.771 2 DEBUG ovsdbapp.backend.ovs_idl [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:11:27 compute-0 nova_compute[261329]: 2025-10-10 10:11:27.791 2 INFO oslo.privsep.daemon [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp6544166s/privsep.sock']
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.114 2 DEBUG nova.network.neutron [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Updated VIF entry in instance network info cache for port 6057c377-c50c-4206-b7f3-690fddb6db9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.115 2 DEBUG nova.network.neutron [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Updating instance_info_cache with network_info: [{"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.141 2 DEBUG oslo_concurrency.lockutils [req-ff91ad2d-9b87-4f1b-b817-0003bcf3401e req-6d052c93-3e64-4888-a855-b6f44f05cdd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.454 2 INFO oslo.privsep.daemon [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Spawned new privsep daemon via rootwrap
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.327 567 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.332 567 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.335 567 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.335 567 INFO oslo.privsep.daemon [-] privsep daemon running as pid 567
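The privsep startup sequence above (helper spawned via sudo + rootwrap, daemon reporting uid/gid 0/0 but only CAP_DAC_OVERRIDE|CAP_NET_ADMIN effective) is standard oslo.privsep behavior: the unprivileged service defines a context whose entrypoints execute in the separate daemon process. A rough sketch of such a context, with names modeled on the vif_plug_ovs.privsep.vif_plug context named in the log rather than copied from its source:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Hypothetical re-creation of a context like vif_plug_ovs.privsep.vif_plug:
    # the daemon keeps root uid/gid but drops to just these capabilities,
    # matching the "(eff/prm/inh)" line above.
    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def set_device_state(devname, up):
        """Runs inside the privsep daemon (pid 567 above), not the caller."""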
Oct 10 10:11:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3006987527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6057c377-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6057c377-c5, col_values=(('external_ids', {'iface-id': '6057c377-c50c-4206-b7f3-690fddb6db9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:37:b6', 'vm-uuid': '26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:28 compute-0 NetworkManager[44849]: <info>  [1760091088.7858] manager: (tap6057c377-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.793 2 INFO os_vif [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5')
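os-vif's plug boils down to the two-command transaction logged at 10:11:28.782-783: create the port on br-int, then stamp the Interface row's external_ids. The iface-id key carries the Neutron port UUID, and it is what ovn-controller matches when it claims the logical port moments later (10:11:30). A sketch of the equivalent ovsdbapp transaction, reusing the api handle from the earlier sketch:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap6057c377-c5', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6057c377-c5',
            ('external_ids', {
                # iface-id == Neutron port UUID: the key ovn-controller matches
                # against Port_Binding.logical_port when claiming the lport.
                'iface-id': '6057c377-c50c-4206-b7f3-690fddb6db9b',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:8b:37:b6',
                'vm-uuid': '26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad',
            })))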
Oct 10 10:11:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:28.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.884 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.884 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.885 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:8b:37:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.885 2 INFO nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Using config drive
Oct 10 10:11:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:28 compute-0 nova_compute[261329]: 2025-10-10 10:11:28.914 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:29 compute-0 ceph-mon[73551]: pgmap v766: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.050 2 INFO nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Creating config drive at /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/disk.config
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.061 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1q9ecivw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.211 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1q9ecivw" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.247 2 DEBUG nova.storage.rbd_utils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.251 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/disk.config 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.441 2 DEBUG oslo_concurrency.processutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/disk.config 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.442 2 INFO nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Deleting local config drive /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad/disk.config because it was imported into RBD.
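Lines 10:11:30.050 through 10:11:30.442 are the whole config-drive round trip: build an ISO locally, import it into the Ceph vms pool, delete the local copy. A condensed sketch of the same three steps, assuming the paths and credentials the log shows (the instance UUID is shortened to UUID, and the staging directory stands in for the tmp1q9ecivw tempdir above):

    import os
    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/UUID/disk.config'

    # 1. Build the ISO9660 config drive; the volume label "config-2" is what
    #    cloud-init probes for inside the guest.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/metadata-staging-dir')  # hypothetical staging directory

    # 2. Import into RBD so the guest attaches it from the 'vms' pool.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, 'UUID_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # 3. The local file is now redundant ("Deleting local config drive ...").
    os.remove(iso)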
Oct 10 10:11:30 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:11:30 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 10 10:11:30 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 10 10:11:30 compute-0 kernel: tap6057c377-c5: entered promiscuous mode
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:30 compute-0 ovn_controller[153080]: 2025-10-10T10:11:30Z|00027|binding|INFO|Claiming lport 6057c377-c50c-4206-b7f3-690fddb6db9b for this chassis.
Oct 10 10:11:30 compute-0 ovn_controller[153080]: 2025-10-10T10:11:30Z|00028|binding|INFO|6057c377-c50c-4206-b7f3-690fddb6db9b: Claiming fa:16:3e:8b:37:b6 10.100.0.29
Oct 10 10:11:30 compute-0 NetworkManager[44849]: <info>  [1760091090.5687] manager: (tap6057c377-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:30 compute-0 systemd-udevd[269283]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:30 compute-0 ovn_controller[153080]: 2025-10-10T10:11:30Z|00029|binding|INFO|Setting lport 6057c377-c50c-4206-b7f3-690fddb6db9b ovn-installed in OVS
Oct 10 10:11:30 compute-0 nova_compute[261329]: 2025-10-10 10:11:30.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:30 compute-0 NetworkManager[44849]: <info>  [1760091090.6367] device (tap6057c377-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:11:30 compute-0 NetworkManager[44849]: <info>  [1760091090.6375] device (tap6057c377-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:11:30 compute-0 ovn_controller[153080]: 2025-10-10T10:11:30Z|00030|binding|INFO|Setting lport 6057c377-c50c-4206-b7f3-690fddb6db9b up in Southbound
Oct 10 10:11:30 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:30.773 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:37:b6 10.100.0.29'], port_security=['fa:16:3e:8b:37:b6 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb7d0c80-75fe-4fd5-9387-a8bb1b1f1f40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=606a4926-1b61-4653-9923-682ecd4c14ec, chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=6057c377-c50c-4206-b7f3-690fddb6db9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:11:30 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:30.775 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 6057c377-c50c-4206-b7f3-690fddb6db9b in datapath afe228b6-c4cc-44fe-ae61-2e9d1b058339 bound to our chassis
Oct 10 10:11:30 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:30.778 162925 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network afe228b6-c4cc-44fe-ae61-2e9d1b058339
Oct 10 10:11:30 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:30.780 162925 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpvu5enrj4/privsep.sock']
Oct 10 10:11:30 compute-0 systemd-machined[215425]: New machine qemu-1-instance-00000002.
Oct 10 10:11:30 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct 10 10:11:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:30.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:11:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.570 162925 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.571 162925 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvu5enrj4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.399 269344 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.403 269344 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.405 269344 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.406 269344 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269344
Oct 10 10:11:31 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:31.573 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe3b92d-67b8-4fd5-b08a-9d348e740a9b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:31 compute-0 nova_compute[261329]: 2025-10-10 10:11:31.750 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091091.7500634, 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:11:31 compute-0 nova_compute[261329]: 2025-10-10 10:11:31.751 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] VM Started (Lifecycle Event)
Oct 10 10:11:31 compute-0 ceph-mon[73551]: pgmap v767: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.102 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.105 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091091.750244, 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.105 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] VM Paused (Lifecycle Event)
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.124 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.127 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.179 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] During sync_power_state the instance has a pending task (spawning). Skip.
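The numeric states in the sync line above decode via nova.compute.power_state: the database still holds 0 (NOSTATE) because the instance is mid-build, while libvirt reports 3 (PAUSED) because Nova creates the guest paused until VIF plugging is confirmed. The relevant constants:

    # Values from nova.compute.power_state, as seen in the log:
    NOSTATE = 0   # "current DB power_state: 0" - nothing recorded yet
    RUNNING = 1   # "VM power_state: 1" once the guest resumes (10:11:33)
    PAUSED  = 3   # "VM power_state: 3" - guest created paused during spawn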
Oct 10 10:11:32 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:32.395 269344 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:32 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:32.395 269344 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:32 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:32.395 269344 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:32 compute-0 nova_compute[261329]: 2025-10-10 10:11:32.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 10 10:11:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:32.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.102 2 DEBUG nova.compute.manager [req-85a7dd86-d10d-453f-a16a-ba2273c2537c req-96060b02-0d29-4449-8863-d488cabc5d46 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.103 2 DEBUG oslo_concurrency.lockutils [req-85a7dd86-d10d-453f-a16a-ba2273c2537c req-96060b02-0d29-4449-8863-d488cabc5d46 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.104 2 DEBUG oslo_concurrency.lockutils [req-85a7dd86-d10d-453f-a16a-ba2273c2537c req-96060b02-0d29-4449-8863-d488cabc5d46 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.104 2 DEBUG oslo_concurrency.lockutils [req-85a7dd86-d10d-453f-a16a-ba2273c2537c req-96060b02-0d29-4449-8863-d488cabc5d46 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
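The Acquiring/acquired/"released" triplets come from oslo_concurrency.lockutils, whose inner() wrapper logs each transition (lockutils.py:404/409/423) along with the waited/held durations. A sketch of the pattern that produces them, with the lock name elided as a hypothetical:

    from oslo_concurrency import lockutils

    # Each call logs "Acquiring", "acquired ... waited Ns" and
    # '"released" ... held Ns' - exactly the triplet seen above.
    @lockutils.synchronized('26cf929e-...-events')  # name elided
    def pop_event():
        """Critical section: pop the pending instance event."""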
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.104 2 DEBUG nova.compute.manager [req-85a7dd86-d10d-453f-a16a-ba2273c2537c req-96060b02-0d29-4449-8863-d488cabc5d46 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Processing event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.105 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
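This completes the handshake: the spawn path registered a wait for network-vif-plugged before plugging, Neutron posted the event through Nova's external-events API once OVN bound the port, and the waiter finished after about one second. A minimal analogue of the mechanism using a plain threading.Event (deliberately not Nova's actual wait_for_instance_event API):

    import threading

    events = {}

    def prepare(tag):
        """Register interest before triggering the action, as spawn does."""
        events[tag] = threading.Event()
        return events[tag]

    def deliver(tag):
        """Roughly what 'Processing event network-vif-plugged-...' amounts to."""
        ev = events.pop(tag, None)
        if ev:
            ev.set()

    tag = 'network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b'
    ev = prepare(tag)
    threading.Timer(1.0, deliver, args=(tag,)).start()  # stand-in for Neutron
    ev.wait(timeout=300)  # ~1 second in this log; Nova's timeout is far longer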
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.109 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091093.1092358, 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.110 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] VM Resumed (Lifecycle Event)
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.115 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.119 2 INFO nova.virt.libvirt.driver [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Instance spawned successfully.
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.121 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.154 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.160 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.163 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.164 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.164 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.165 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.165 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.166 2 DEBUG nova.virt.libvirt.driver [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.187 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.213 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9b4c63-07a0-4cd6-ade5-78a9fb7d4e4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.215 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapafe228b6-c1 in ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.217 269344 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapafe228b6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.217 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[08a7c509-899b-4816-8dda-6884eb3e7e6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.221 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[46fae65f-acf0-4946-b2b9-32c1bb89bc42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
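"Provisioning metadata" for the network means building an isolated namespace plus a veth pair: tapafe228b6-c0 stays in the root namespace while tapafe228b6-c1 lives inside ovnmeta-<network-uuid>, where the agent's haproxy wrapper (mounted into the container per the podman config below) serves the metadata endpoint. Neutron's privileged ip_lib drives this through pyroute2; a rough sketch, assuming a recent pyroute2 that accepts a namespace name for the peer:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339'
    netns.create(ns)  # idempotence and error handling omitted in this sketch

    with IPRoute() as ip:
        # One veth end in the root namespace, the peer created directly inside
        # the ovnmeta namespace - matching the two tapafe228b6-c* devices that
        # appear in the log.
        ip.link('add', ifname='tapafe228b6-c0', kind='veth',
                peer={'ifname': 'tapafe228b6-c1', 'net_ns_fd': ns})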
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.254 2 INFO nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Took 14.31 seconds to spawn the instance on the hypervisor.
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.256 2 DEBUG nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.259 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[4985739c-1cfb-47d3-9eae-72e656ba610e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.292 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0fc6de-7fcd-4555-a646-ebfafa115b65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.294 162925 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmprjt7xume/privsep.sock']
Oct 10 10:11:33 compute-0 podman[269355]: 2025-10-10 10:11:33.333681791 +0000 UTC m=+0.045730642 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.360 2 INFO nova.compute.manager [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Took 15.32 seconds to build instance.
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.395 2 DEBUG oslo_concurrency.lockutils [None req-79ac54e6-85f1-47d1-b6b3-807757521041 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:33 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 11.
Oct 10 10:11:33 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:11:33 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.872s CPU time.
Oct 10 10:11:33 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:11:33 compute-0 nova_compute[261329]: 2025-10-10 10:11:33.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:33 compute-0 podman[269426]: 2025-10-10 10:11:33.905762898 +0000 UTC m=+0.049790847 container create a34e5698492f6c796f8e4b7a1a1d6e29a3ba76acdaa77cfab961b8c8b00a25b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93478cea2b1c03d94252e28789b4c615eda6c170f3f7aed45cfa2d985be11b79/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93478cea2b1c03d94252e28789b4c615eda6c170f3f7aed45cfa2d985be11b79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93478cea2b1c03d94252e28789b4c615eda6c170f3f7aed45cfa2d985be11b79/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93478cea2b1c03d94252e28789b4c615eda6c170f3f7aed45cfa2d985be11b79/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ruydzo-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.971 162925 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.972 162925 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprjt7xume/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.830 269423 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.834 269423 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.836 269423 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.836 269423 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269423
Oct 10 10:11:33 compute-0 podman[269426]: 2025-10-10 10:11:33.88702725 +0000 UTC m=+0.031055229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:11:33 compute-0 ceph-mon[73551]: pgmap v768: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 10 10:11:33 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:33.975 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb66dfe-55d6-42eb-b519-80a26b951949]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:33 compute-0 podman[269426]: 2025-10-10 10:11:33.993692731 +0000 UTC m=+0.137720700 container init a34e5698492f6c796f8e4b7a1a1d6e29a3ba76acdaa77cfab961b8c8b00a25b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 10:11:34 compute-0 podman[269426]: 2025-10-10 10:11:34.000304872 +0000 UTC m=+0.144332831 container start a34e5698492f6c796f8e4b7a1a1d6e29a3ba76acdaa77cfab961b8c8b00a25b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:11:34 compute-0 bash[269426]: a34e5698492f6c796f8e4b7a1a1d6e29a3ba76acdaa77cfab961b8c8b00a25b5
Oct 10 10:11:34 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:11:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:11:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:34 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:34.493 269423 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:34 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:34.493 269423 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:34 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:34.493 269423 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 10 10:11:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:34.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.098 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[204a7087-5768-4506-937a-bf5c458056be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 NetworkManager[44849]: <info>  [1760091095.1092] manager: (tapafe228b6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.111 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[319b07e1-964e-4e9f-b319-bc297bde4605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 systemd-udevd[269495]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.149 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[821fd743-252a-44db-acd0-6cb1a98ae2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.156 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[5491dd73-0a84-4a42-a231-c73624daddcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 NetworkManager[44849]: <info>  [1760091095.1942] device (tapafe228b6-c0): carrier: link connected
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.201 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[260ca409-8d07-459d-938b-ae9225214cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.225 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[5eed3262-68d5-4ce4-bfb3-d92686a5c03e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapafe228b6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:ae:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399875, 'reachable_time': 38439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269514, 'error': None, 'target': 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.248 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[6b702fd5-e051-4656-8cd5-6dced40001fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:ae47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 399875, 'tstamp': 399875}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269516, 'error': None, 'target': 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.268 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[0df664c5-3ee7-44af-aad9-d268c382396b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapafe228b6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:ae:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399875, 'reachable_time': 38439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269517, 'error': None, 'target': 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.317 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c78c66-6514-4ea8-a15f-afae94d06399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.397 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[17b6b2c4-5f4c-4770-9479-88522bec8d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
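The privsep replies above are netlink dumps (RTM_NEWLINK for the veth tapafe228b6-c1, RTM_NEWADDR for its link-local address) that the privileged helper serializes back to the agent after entering the ovnmeta namespace. A minimal sketch of reproducing such a dump with pyroute2, whose message format matches these dicts; the namespace name is copied from the log and root privileges are assumed:

    # Sketch: dump link and address state inside the OVN metadata namespace,
    # roughly what the privsep helper returns in the replies above.
    from pyroute2 import NetNS

    NS = 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339'  # from the log
    with NetNS(NS) as ns:
        for link in ns.get_links():                  # RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():                   # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])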
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.400 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafe228b6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.400 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.401 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafe228b6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:35 compute-0 NetworkManager[44849]: <info>  [1760091095.4041] manager: (tapafe228b6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 10 10:11:35 compute-0 kernel: tapafe228b6-c0: entered promiscuous mode
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.408 2 DEBUG nova.compute.manager [req-dbe6664b-24e8-4cc9-958e-c1c09c3cd7d4 req-40835f99-a1ba-4eca-87d1-6bba9b85984b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.408 2 DEBUG oslo_concurrency.lockutils [req-dbe6664b-24e8-4cc9-958e-c1c09c3cd7d4 req-40835f99-a1ba-4eca-87d1-6bba9b85984b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.408 2 DEBUG oslo_concurrency.lockutils [req-dbe6664b-24e8-4cc9-958e-c1c09c3cd7d4 req-40835f99-a1ba-4eca-87d1-6bba9b85984b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.409 2 DEBUG oslo_concurrency.lockutils [req-dbe6664b-24e8-4cc9-958e-c1c09c3cd7d4 req-40835f99-a1ba-4eca-87d1-6bba9b85984b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.409 2 DEBUG nova.compute.manager [req-dbe6664b-24e8-4cc9-958e-c1c09c3cd7d4 req-40835f99-a1ba-4eca-87d1-6bba9b85984b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] No waiting events found dispatching network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.409 2 WARNING nova.compute.manager [req-dbe6664b-24e8-4cc9-958e-c1c09c3cd7d4 req-40835f99-a1ba-4eca-87d1-6bba9b85984b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received unexpected event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b for instance with vm_state active and task_state None.
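The acquire/release triplet around the "26cf929e-...-events" lock and the WARNING that follows show nova popping an external event (network-vif-plugged) for which no waiter is registered, since the instance is already active with no task in flight. The per-instance serialization uses oslo.concurrency named locks; a minimal sketch of that pattern, where pop_instance_event and the waiters dict are hypothetical stand-ins rather than nova's actual code:

    # Sketch of the oslo.concurrency named-lock pattern seen above; the
    # function and the waiters mapping are hypothetical, not nova code.
    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name, waiters):
        with lockutils.lock(f'{instance_uuid}-events'):
            # Return a registered waiter, or None; when None, the caller
            # logs the event as unexpected, as in the WARNING above.
            return waiters.pop(event_name, None)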
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.415 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapafe228b6-c0, col_values=(('external_ids', {'iface-id': 'd987dc98-eb24-46a8-a68d-3f08d1bf213d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
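Taken together, the three OVSDB transactions (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand writing external_ids:iface-id) move the metadata tap onto the integration bridge and hand the port to OVN, which the ovn_controller binding messages nearby react to. A rough equivalent through ovsdbapp's Open vSwitch schema API, offered as a sketch only; it assumes a local ovsdb-server on the usual unix socket, and the port, bridge, and iface-id values come from the log:

    # Sketch: replay the three logged OVSDB commands with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapafe228b6-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapafe228b6-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapafe228b6-c0',
            ('external_ids', {'iface-id': 'd987dc98-eb24-46a8-a68d-3f08d1bf213d'})))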
Oct 10 10:11:35 compute-0 ovn_controller[153080]: 2025-10-10T10:11:35Z|00031|binding|INFO|Releasing lport d987dc98-eb24-46a8-a68d-3f08d1bf213d from this chassis (sb_readonly=0)
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:35 compute-0 nova_compute[261329]: 2025-10-10 10:11:35.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.434 162925 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/afe228b6-c4cc-44fe-ae61-2e9d1b058339.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/afe228b6-c4cc-44fe-ae61-2e9d1b058339.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.435 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[3ead6b2a-f2cc-4fdc-9ec7-f7d3e971a800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.437 162925 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: global
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     log         /dev/log local0 debug
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     log-tag     haproxy-metadata-proxy-afe228b6-c4cc-44fe-ae61-2e9d1b058339
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     user        root
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     group       root
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     maxconn     1024
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     pidfile     /var/lib/neutron/external/pids/afe228b6-c4cc-44fe-ae61-2e9d1b058339.pid.haproxy
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     daemon
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: defaults
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     log global
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     mode http
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     option httplog
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     option dontlognull
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     option http-server-close
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     option forwardfor
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     retries                 3
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     timeout http-request    30s
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     timeout connect         30s
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     timeout client          32s
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     timeout server          32s
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     timeout http-keep-alive 30s
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: listen listener
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     bind 169.254.169.254:80
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:     http-request add-header X-OVN-Network-ID afe228b6-c4cc-44fe-ae61-2e9d1b058339
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:11:35 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:35.438 162925 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'env', 'PROCESS_TAG=haproxy-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/afe228b6-c4cc-44fe-ae61-2e9d1b058339.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
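The generated haproxy configuration binds 169.254.169.254:80 inside the ovnmeta namespace, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the metadata service can resolve the network. From a guest on that network the proxy is exercised with a plain HTTP request; a standard-library sketch, to be run inside the instance rather than on the compute host:

    # Sketch: guest-side request against the metadata proxy configured above.
    import http.client, json

    conn = http.client.HTTPConnection('169.254.169.254', 80, timeout=10)
    conn.request('GET', '/openstack/latest/meta_data.json')
    resp = conn.getresponse()
    print(resp.status)                            # 200 via the haproxy listener
    print(json.loads(resp.read()).get('uuid'))    # the instance uuid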
Oct 10 10:11:35 compute-0 podman[269550]: 2025-10-10 10:11:35.859217427 +0000 UTC m=+0.057007809 container create 1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:11:35 compute-0 systemd[1]: Started libpod-conmon-1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820.scope.
Oct 10 10:11:35 compute-0 podman[269550]: 2025-10-10 10:11:35.831090796 +0000 UTC m=+0.028881238 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:11:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5f53675f3933398c1915068aa8777a6db5aaa790c23f712e86256147ff2cb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:11:35 compute-0 podman[269550]: 2025-10-10 10:11:35.954707393 +0000 UTC m=+0.152497795 container init 1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 10 10:11:35 compute-0 podman[269550]: 2025-10-10 10:11:35.95970234 +0000 UTC m=+0.157492712 container start 1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 10:11:35 compute-0 ceph-mon[73551]: pgmap v769: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 10 10:11:35 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [NOTICE]   (269569) : New worker (269571) forked
Oct 10 10:11:35 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [NOTICE]   (269569) : Loading success.
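The podman create/init/start triplet wraps that haproxy in a per-network container (neutron-haproxy-ovnmeta-...), and the NOTICE lines confirm the master process forked a worker and loaded the config. A small sketch for checking the resulting container afterwards, assuming the podman CLI is on PATH; the container name is copied from the log:

    # Sketch: inspect the per-network haproxy container started above.
    import json, subprocess

    name = 'neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339'
    out = subprocess.run(['podman', 'inspect', name],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(out)[0]['State']
    print(state['Status'], state['Pid'])          # e.g. 'running' and the PID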
Oct 10 10:11:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 10 10:11:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:36.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:37.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:11:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:37] "GET /metrics HTTP/1.1" 200 48388 "" "Prometheus/2.51.0"
Oct 10 10:11:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:37] "GET /metrics HTTP/1.1" 200 48388 "" "Prometheus/2.51.0"
Oct 10 10:11:37 compute-0 nova_compute[261329]: 2025-10-10 10:11:37.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:37 compute-0 sudo[269582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:11:37 compute-0 sudo[269582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:37 compute-0 sudo[269582]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:38 compute-0 ceph-mon[73551]: pgmap v770: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 10 10:11:38 compute-0 nova_compute[261329]: 2025-10-10 10:11:38.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 10 10:11:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000034s ======
Oct 10 10:11:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Oct 10 10:11:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:38.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:40 compute-0 ceph-mon[73551]: pgmap v771: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 10 10:11:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:40 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:11:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:40 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:11:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 10 10:11:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:40.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:40.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:40 compute-0 NetworkManager[44849]: <info>  [1760091100.9975] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 10 10:11:40 compute-0 NetworkManager[44849]: <info>  [1760091100.9980] device (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 10:11:40 compute-0 NetworkManager[44849]: <info>  [1760091100.9986] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 10 10:11:40 compute-0 NetworkManager[44849]: <info>  [1760091100.9989] device (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 10:11:40 compute-0 nova_compute[261329]: 2025-10-10 10:11:40.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:40 compute-0 NetworkManager[44849]: <info>  [1760091100.9994] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 10 10:11:40 compute-0 NetworkManager[44849]: <info>  [1760091100.9998] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 10 10:11:41 compute-0 NetworkManager[44849]: <info>  [1760091101.0000] device (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 10:11:41 compute-0 NetworkManager[44849]: <info>  [1760091101.0001] device (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 10:11:41 compute-0 nova_compute[261329]: 2025-10-10 10:11:41.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:41 compute-0 ovn_controller[153080]: 2025-10-10T10:11:41Z|00032|binding|INFO|Releasing lport d987dc98-eb24-46a8-a68d-3f08d1bf213d from this chassis (sb_readonly=0)
Oct 10 10:11:41 compute-0 nova_compute[261329]: 2025-10-10 10:11:41.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:41.898 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:41.899 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:41.899 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:42 compute-0 ceph-mon[73551]: pgmap v772: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 10 10:11:42 compute-0 nova_compute[261329]: 2025-10-10 10:11:42.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 77 op/s
Oct 10 10:11:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:42.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:11:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:42.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:11:43 compute-0 nova_compute[261329]: 2025-10-10 10:11:43.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:44 compute-0 ceph-mon[73551]: pgmap v773: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 77 op/s
Oct 10 10:11:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct 10 10:11:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:44.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:45 compute-0 ovn_controller[153080]: 2025-10-10T10:11:45Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:37:b6 10.100.0.29
Oct 10 10:11:45 compute-0 ovn_controller[153080]: 2025-10-10T10:11:45Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:37:b6 10.100.0.29
Oct 10 10:11:46 compute-0 ceph-mon[73551]: pgmap v774: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:11:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
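The ganesha startup block mixes severities: the DBUS CRIT lines stem from /run/dbus/system_bus_socket being absent in the container, the kerberos WARN/CRIT lines from an unusable /etc/krb5.keytab, yet the server still reaches NFS SERVER INITIALIZED. For triage it can help to tally severities over the fixed ": <component> :<SEVERITY> :" layout these lines share; a sketch with a regex written against the lines above (pipe a journal excerpt to stdin):

    # Sketch: histogram of ganesha.nfsd message severities from a journal
    # excerpt like the block above, keyed by (component, severity).
    import collections, re, sys

    pat = re.compile(r'ganesha\.nfsd-\d+\[[^\]]*\]\s+\S+\s+:(.+?)\s+:(\w+)\s+:')
    counts = collections.Counter()
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    for (component, severity), n in counts.most_common():
        print(f'{severity:6} {component:14} {n}')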
Oct 10 10:11:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:11:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:11:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct 10 10:11:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:46.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:47.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:11:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:47] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:11:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:47] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:11:47 compute-0 nova_compute[261329]: 2025-10-10 10:11:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:48 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:48 compute-0 ceph-mon[73551]: pgmap v775: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct 10 10:11:48 compute-0 nova_compute[261329]: 2025-10-10 10:11:48.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:48 compute-0 nova_compute[261329]: 2025-10-10 10:11:48.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 10 10:11:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:48.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:11:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101149 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:11:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:49 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.715 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.716 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquired lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.716 2 DEBUG nova.network.neutron [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:11:49 compute-0 nova_compute[261329]: 2025-10-10 10:11:49.717 2 DEBUG nova.objects.instance [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:11:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:49 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:50 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:50 compute-0 ceph-mon[73551]: pgmap v776: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 10 10:11:50 compute-0 podman[269638]: 2025-10-10 10:11:50.242704329 +0000 UTC m=+0.078438471 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 10 10:11:50 compute-0 podman[269637]: 2025-10-10 10:11:50.254826907 +0000 UTC m=+0.099550137 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:11:50 compute-0 podman[269639]: 2025-10-10 10:11:50.289522176 +0000 UTC m=+0.124006327 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.766 2 DEBUG nova.network.neutron [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Updating instance_info_cache with network_info: [{"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.794 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Releasing lock "refresh_cache-26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.795 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.796 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.796 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.797 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.797 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.798 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.799 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.821 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.821 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.822 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
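The acquire/release pairs logged here are oslo.concurrency's lockutils at work; nova wraps resource-tracker methods in this lock. A minimal sketch of the same pattern, with the lock name taken from the log and the guarded function body elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # The decorator is what emits the "Acquiring lock ... acquired ...
        # released" trio seen in the three lines above.
        pass

    # The context-manager form is equivalent:
    with lockutils.lock('compute_resources'):
        pass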
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.822 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:11:50 compute-0 nova_compute[261329]: 2025-10-10 10:11:50.823 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:50.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1869105366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.276 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
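The audit shells out to ceph exactly as the CMD lines show, via oslo.concurrency's processutils. A minimal sketch under that assumption; the JSON keys follow the ceph df --format=json schema, and the free-space math is illustrative:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # ceph reports bytes; the resource tracker wants GiB-scale figures.
    free_gib = stats['total_avail_bytes'] / 1024 ** 3
    print(f'free: {free_gib:.1f} GiB')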
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.357 2 DEBUG nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.358 2 DEBUG nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.558 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.559 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4427MB free_disk=59.89728546142578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.560 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.560 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:51 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.659 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Instance 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.659 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.660 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
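The final view is simple arithmetic over reserved amounts plus per-instance allocations. A worked check against the figures above, with the 512 MB memory reservation taken from the inventory logged just below:

    phys_ram, reserved_ram = 7680, 512        # MB, per the inventory record
    instance_ram = 128                        # MB, the one tracked instance
    assert reserved_ram + instance_ram == 640  # matches used_ram=640MB
    assert 8 - 1 == 7                          # total minus allocated vcpus: free_vcpus=7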
Oct 10 10:11:51 compute-0 nova_compute[261329]: 2025-10-10 10:11:51.697 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:51 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:52 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:52 compute-0 ceph-mon[73551]: pgmap v777: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1869105366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716989008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.162 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.169 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.208 2 ERROR nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [req-127771a5-b8af-4806-8208-380e94bb4f2e] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 5b1ab6df-62aa-4a93-8e24-04440191f108.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-127771a5-b8af-4806-8208-380e94bb4f2e"}]}
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.246 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing inventories for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.267 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating ProviderTree inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.268 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.285 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing aggregate associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.343 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing trait associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_CLMUL,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.405 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689505701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.839 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.844 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:11:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.882 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updated inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.883 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.883 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
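The 409 above is placement's optimistic concurrency control: every inventory PUT carries the provider generation, a stale generation is rejected with placement.concurrent_update, and the client refreshes and retries, which is exactly what the preceding lines show succeeding. A minimal sketch of that loop, assuming a plain requests session against a hypothetical placement endpoint (nova itself goes through an authenticated keystoneauth session):

    import requests

    PLACEMENT = 'http://placement.example.com'   # hypothetical endpoint
    RP = '5b1ab6df-62aa-4a93-8e24-04440191f108'

    def put_inventories(session, inventories, retries=3):
        url = f'{PLACEMENT}/resource_providers/{RP}/inventories'
        for _ in range(retries):
            # Refresh the current generation before each attempt.
            gen = session.get(url).json()['resource_provider_generation']
            resp = session.put(url, json={
                'resource_provider_generation': gen,
                'inventories': inventories,
            })
            if resp.status_code != 409:   # success, or a non-conflict error
                return resp
        raise RuntimeError('generation conflict persisted after retries')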
Oct 10 10:11:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:11:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:52.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.908 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:11:52 compute-0 nova_compute[261329]: 2025-10-10 10:11:52.908 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:11:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:11:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1716989008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3689505701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:53 compute-0 nova_compute[261329]: 2025-10-10 10:11:53.349 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-0 nova_compute[261329]: 2025-10-10 10:11:53.351 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-0 nova_compute[261329]: 2025-10-10 10:11:53.377 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:53 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad000029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:53 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:53 compute-0 nova_compute[261329]: 2025-10-10 10:11:53.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.008 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.009 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.010 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.011 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.011 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.013 2 INFO nova.compute.manager [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Terminating instance
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.015 2 DEBUG nova.compute.manager [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:11:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:54 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:54 compute-0 kernel: tap6057c377-c5 (unregistering): left promiscuous mode
Oct 10 10:11:54 compute-0 NetworkManager[44849]: <info>  [1760091114.0694] device (tap6057c377-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:11:54 compute-0 ovn_controller[153080]: 2025-10-10T10:11:54Z|00033|binding|INFO|Releasing lport 6057c377-c50c-4206-b7f3-690fddb6db9b from this chassis (sb_readonly=0)
Oct 10 10:11:54 compute-0 ovn_controller[153080]: 2025-10-10T10:11:54Z|00034|binding|INFO|Setting lport 6057c377-c50c-4206-b7f3-690fddb6db9b down in Southbound
Oct 10 10:11:54 compute-0 ovn_controller[153080]: 2025-10-10T10:11:54Z|00035|binding|INFO|Removing iface tap6057c377-c5 ovn-installed in OVS
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.095 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:37:b6 10.100.0.29'], port_security=['fa:16:3e:8b:37:b6 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb7d0c80-75fe-4fd5-9387-a8bb1b1f1f40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=606a4926-1b61-4653-9923-682ecd4c14ec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=6057c377-c50c-4206-b7f3-690fddb6db9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.098 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 6057c377-c50c-4206-b7f3-690fddb6db9b in datapath afe228b6-c4cc-44fe-ae61-2e9d1b058339 unbound from our chassis
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.100 162925 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network afe228b6-c4cc-44fe-ae61-2e9d1b058339, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.102 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[4da73a2c-5c98-4ec0-971c-9e4f380de915]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.103 162925 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339 namespace which is not needed anymore
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 10 10:11:54 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 13.406s CPU time.
Oct 10 10:11:54 compute-0 systemd-machined[215425]: Machine qemu-1-instance-00000002 terminated.
Oct 10 10:11:54 compute-0 ceph-mon[73551]: pgmap v778: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:11:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3217553215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/846796813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3655824811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:54 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [NOTICE]   (269569) : haproxy version is 2.8.14-c23fe91
Oct 10 10:11:54 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [NOTICE]   (269569) : path to executable is /usr/sbin/haproxy
Oct 10 10:11:54 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [WARNING]  (269569) : Exiting Master process...
Oct 10 10:11:54 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [ALERT]    (269569) : Current worker (269571) exited with code 143 (Terminated)
Oct 10 10:11:54 compute-0 neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339[269565]: [WARNING]  (269569) : All workers exited. Exiting... (0)
Oct 10 10:11:54 compute-0 systemd[1]: libpod-1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820.scope: Deactivated successfully.
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.246 2 INFO nova.virt.libvirt.driver [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Instance destroyed successfully.
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.248 2 DEBUG nova.objects.instance [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:11:54 compute-0 podman[269797]: 2025-10-10 10:11:54.251005062 +0000 UTC m=+0.048389438 container died 1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.265 2 DEBUG nova.virt.libvirt.vif [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:11:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-558029342',display_name='tempest-TestNetworkBasicOps-server-558029342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-558029342',id=2,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG1ya1eiRQKbbkyVKT5z/8MpCZ88zDAb6Vo+rf+NsvCQSRJggsI+FodbcOeRIRwi8xdePd3p3I9XBCPTVOuZwkY10EEaSUvv7qGSmBYABCBnVC7fXPVGxJwgLZnLeevRcw==',key_name='tempest-TestNetworkBasicOps-46626227',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:11:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-6qqxeoq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:11:33Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.266 2 DEBUG nova.network.os_vif_util [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "6057c377-c50c-4206-b7f3-690fddb6db9b", "address": "fa:16:3e:8b:37:b6", "network": {"id": "afe228b6-c4cc-44fe-ae61-2e9d1b058339", "bridge": "br-int", "label": "tempest-network-smoke--1476966903", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6057c377-c5", "ovs_interfaceid": "6057c377-c50c-4206-b7f3-690fddb6db9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.266 2 DEBUG nova.network.os_vif_util [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.267 2 DEBUG os_vif [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6057c377-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.276 2 INFO os_vif [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:37:b6,bridge_name='br-int',has_traffic_filtering=True,id=6057c377-c50c-4206-b7f3-690fddb6db9b,network=Network(afe228b6-c4cc-44fe-ae61-2e9d1b058339),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6057c377-c5')
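The DelPortCommand in the transaction above is ovsdbapp's. A minimal sketch of issuing the same delete directly, with the OVSDB socket path assumed and the port and bridge names taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True mirrors the logged command: no error if already gone.
    api.del_port('tap6057c377-c5', bridge='br-int',
                 if_exists=True).execute(check_error=True)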
Oct 10 10:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820-userdata-shm.mount: Deactivated successfully.
Oct 10 10:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff5f53675f3933398c1915068aa8777a6db5aaa790c23f712e86256147ff2cb1-merged.mount: Deactivated successfully.
Oct 10 10:11:54 compute-0 podman[269797]: 2025-10-10 10:11:54.295908378 +0000 UTC m=+0.093292734 container cleanup 1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:11:54 compute-0 systemd[1]: libpod-conmon-1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820.scope: Deactivated successfully.
Oct 10 10:11:54 compute-0 podman[269855]: 2025-10-10 10:11:54.358853922 +0000 UTC m=+0.040634081 container remove 1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.365 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[b888f069-8794-40d5-ad6d-1b05ca149238]: (4, ('Fri Oct 10 10:11:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339 (1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820)\n1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820\nFri Oct 10 10:11:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339 (1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820)\n1b007c577060084491f079acab0a6126026e1a1ccbea1d0d70ee6c2f509f3820\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.367 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[2158c0b0-770d-4524-9b29-ea6cfde8e9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.369 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafe228b6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 kernel: tapafe228b6-c0: left promiscuous mode
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.389 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[00f69c51-3a9f-4f83-baa9-d0ddcb41a43e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.433 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[ce69a954-1e4c-48bf-a498-b33b3dffc519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.435 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcf243f-152c-4bc1-97ca-3b9288e6daf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.453 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[1012d2e3-7a01-4fe0-8a2d-7c4e549e47b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399866, 'reachable_time': 24222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269870, 'error': None, 'target': 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.465 163038 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:11:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dafe228b6\x2dc4cc\x2d44fe\x2dae61\x2d2e9d1b058339.mount: Deactivated successfully.
Oct 10 10:11:54 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:11:54.465 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[28ef78ab-9bd6-4bc1-8ee6-811e99aea914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
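Namespace teardown in neutron's privileged ip_lib ultimately goes through pyroute2, as the remove_netns line above suggests. A minimal sketch of the same removal; the namespace name is taken from the log, and the call needs root, so treat it as illustrative:

    from pyroute2 import netns

    NS = 'ovnmeta-afe228b6-c4cc-44fe-ae61-2e9d1b058339'
    if NS in netns.listnetns():
        netns.remove(NS)   # unlinks the /run/netns entry, as logged above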
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.689 2 INFO nova.virt.libvirt.driver [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Deleting instance files /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_del
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.691 2 INFO nova.virt.libvirt.driver [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Deletion of /var/lib/nova/instances/26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad_del complete
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.718 2 DEBUG nova.compute.manager [req-9a57f296-9791-4317-abc5-88f940f41f34 req-f7813326-009b-4952-a33e-68e97eaf4119 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-vif-unplugged-6057c377-c50c-4206-b7f3-690fddb6db9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.719 2 DEBUG oslo_concurrency.lockutils [req-9a57f296-9791-4317-abc5-88f940f41f34 req-f7813326-009b-4952-a33e-68e97eaf4119 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.719 2 DEBUG oslo_concurrency.lockutils [req-9a57f296-9791-4317-abc5-88f940f41f34 req-f7813326-009b-4952-a33e-68e97eaf4119 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.719 2 DEBUG oslo_concurrency.lockutils [req-9a57f296-9791-4317-abc5-88f940f41f34 req-f7813326-009b-4952-a33e-68e97eaf4119 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.720 2 DEBUG nova.compute.manager [req-9a57f296-9791-4317-abc5-88f940f41f34 req-f7813326-009b-4952-a33e-68e97eaf4119 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] No waiting events found dispatching network-vif-unplugged-6057c377-c50c-4206-b7f3-690fddb6db9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.720 2 DEBUG nova.compute.manager [req-9a57f296-9791-4317-abc5-88f940f41f34 req-f7813326-009b-4952-a33e-68e97eaf4119 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-vif-unplugged-6057c377-c50c-4206-b7f3-690fddb6db9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.759 2 DEBUG nova.virt.libvirt.host [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.760 2 INFO nova.virt.libvirt.host [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] UEFI support detected
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.762 2 INFO nova.compute.manager [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.763 2 DEBUG oslo.service.loopingcall [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.763 2 DEBUG nova.compute.manager [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:11:54 compute-0 nova_compute[261329]: 2025-10-10 10:11:54.763 2 DEBUG nova.network.neutron [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:11:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:11:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:54.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:11:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:11:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:54.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:11:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3776578443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:55 compute-0 nova_compute[261329]: 2025-10-10 10:11:55.572 2 DEBUG nova.network.neutron [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:11:55 compute-0 nova_compute[261329]: 2025-10-10 10:11:55.592 2 INFO nova.compute.manager [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Took 0.83 seconds to deallocate network for instance.
Oct 10 10:11:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:55 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:55 compute-0 nova_compute[261329]: 2025-10-10 10:11:55.652 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:55 compute-0 nova_compute[261329]: 2025-10-10 10:11:55.653 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:55 compute-0 nova_compute[261329]: 2025-10-10 10:11:55.715 2 DEBUG oslo_concurrency.processutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:55 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad000029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:56 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:56 compute-0 ceph-mon[73551]: pgmap v779: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3981716478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.247 2 DEBUG oslo_concurrency.processutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.253 2 DEBUG nova.compute.provider_tree [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.271 2 DEBUG nova.scheduler.client.report [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.294 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.323 2 INFO nova.scheduler.client.report [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.395 2 DEBUG oslo_concurrency.lockutils [None req-92fe3559-ce38-45de-abf8-91ecb1c86708 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.814 2 DEBUG nova.compute.manager [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.815 2 DEBUG oslo_concurrency.lockutils [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.815 2 DEBUG oslo_concurrency.lockutils [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.815 2 DEBUG oslo_concurrency.lockutils [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.816 2 DEBUG nova.compute.manager [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] No waiting events found dispatching network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.816 2 WARNING nova.compute.manager [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received unexpected event network-vif-plugged-6057c377-c50c-4206-b7f3-690fddb6db9b for instance with vm_state deleted and task_state None.
Oct 10 10:11:56 compute-0 nova_compute[261329]: 2025-10-10 10:11:56.816 2 DEBUG nova.compute.manager [req-b0d5e2ab-c499-40a9-947b-c08963b66b5f req-12ac8d37-8c33-44e5-ac10-75a529dc47d6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Received event network-vif-deleted-6057c377-c50c-4206-b7f3-690fddb6db9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:11:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:56.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:56.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:57.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:11:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:11:57.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:11:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3981716478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:57] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:11:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:11:57] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:11:57 compute-0 nova_compute[261329]: 2025-10-10 10:11:57.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:57 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:57 compute-0 sudo[269898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:11:57 compute-0 sudo[269898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:57 compute-0 sudo[269898]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:57 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:58 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:58 compute-0 ceph-mon[73551]: pgmap v780: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 10 10:11:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:11:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:58.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:59 compute-0 nova_compute[261329]: 2025-10-10 10:11:59.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:59 compute-0 nova_compute[261329]: 2025-10-10 10:11:59.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:59 compute-0 nova_compute[261329]: 2025-10-10 10:11:59.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:59 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8002b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:11:59 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:00 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:00 compute-0 ceph-mon[73551]: pgmap v781: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 10 10:12:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 10 10:12:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:12:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:01 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:01 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8002b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:02 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:02 compute-0 ceph-mon[73551]: pgmap v782: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 10 10:12:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:02 compute-0 nova_compute[261329]: 2025-10-10 10:12:02.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 10 10:12:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2472464452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:03 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:03 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:04 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8002b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:04 compute-0 podman[269931]: 2025-10-10 10:12:04.242254585 +0000 UTC m=+0.089069451 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 10 10:12:04 compute-0 ceph-mon[73551]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 10 10:12:04 compute-0 nova_compute[261329]: 2025-10-10 10:12:04.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 10 10:12:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:04.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:05 compute-0 ceph-mon[73551]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 10 10:12:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:05 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:05 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:06 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 10 10:12:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:07.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:12:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:07] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:12:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:07] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:12:07 compute-0 nova_compute[261329]: 2025-10-10 10:12:07.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:07 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:07 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:07 compute-0 ceph-mon[73551]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 10 10:12:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:08 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Oct 10 10:12:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:08.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:09 compute-0 nova_compute[261329]: 2025-10-10 10:12:09.244 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091114.2434008, 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:12:09 compute-0 nova_compute[261329]: 2025-10-10 10:12:09.245 2 INFO nova.compute.manager [-] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] VM Stopped (Lifecycle Event)
Oct 10 10:12:09 compute-0 nova_compute[261329]: 2025-10-10 10:12:09.275 2 DEBUG nova.compute.manager [None req-b5ecf79a-a9df-49b3-9220-973eaca9246a - - - - - -] [instance: 26cf929e-4a9a-4f5a-a05a-df1d4fc8aaad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:12:09 compute-0 nova_compute[261329]: 2025-10-10 10:12:09.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:09 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:09 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:09 compute-0 ceph-mon[73551]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Oct 10 10:12:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:10 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:12:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:10.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:11 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:11 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:11 compute-0 ceph-mon[73551]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:12:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:12 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:12 compute-0 nova_compute[261329]: 2025-10-10 10:12:12.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:12:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:12.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:13 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:13 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:13 compute-0 ceph-mon[73551]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:12:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:14 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:14 compute-0 nova_compute[261329]: 2025-10-10 10:12:14.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:15 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:15 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:16 compute-0 ceph-mon[73551]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:16 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:12:16
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'default.rgw.meta', '.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control']
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:12:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:12:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:12:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:12:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:16.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:12:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:16.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:17.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:12:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:17] "GET /metrics HTTP/1.1" 200 48365 "" "Prometheus/2.51.0"
Oct 10 10:12:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:17] "GET /metrics HTTP/1.1" 200 48365 "" "Prometheus/2.51.0"
Oct 10 10:12:17 compute-0 nova_compute[261329]: 2025-10-10 10:12:17.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:17 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:17 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:17 compute-0 sudo[269964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:12:17 compute-0 sudo[269964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:17 compute-0 sudo[269964]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:18 compute-0 ceph-mon[73551]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:18 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:12:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:18.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:19 compute-0 nova_compute[261329]: 2025-10-10 10:12:19.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:19 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:19 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:20 compute-0 ceph-mon[73551]: pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:12:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:20 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:20.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:21 compute-0 podman[269995]: 2025-10-10 10:12:21.218240003 +0000 UTC m=+0.059807724 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:12:21 compute-0 podman[269994]: 2025-10-10 10:12:21.218336967 +0000 UTC m=+0.066973804 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Oct 10 10:12:21 compute-0 podman[269996]: 2025-10-10 10:12:21.248285164 +0000 UTC m=+0.087583993 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
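
Three edpm-managed containers (iscsid, multipathd, ovn_controller) report health_status=healthy in the same second; per the embedded config_data, each mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck as its test. A sketch reading the same status back out-of-band; the inspect JSON path used here is the common podman layout, and older releases used "Healthcheck" instead of "Health", so both keys are tried:

    # Sketch: query a container's current health status, the value these
    # journal health_status events report.
    import json
    import subprocess

    def health_status(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("iscsid", "multipathd", "ovn_controller"):
        print(name, health_status(name))
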
Oct 10 10:12:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:21 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:21 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:22 compute-0 ceph-mon[73551]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:22 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:22 compute-0 nova_compute[261329]: 2025-10-10 10:12:22.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:22 compute-0 sudo[270058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:12:22 compute-0 sudo[270058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:22 compute-0 sudo[270058]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:22 compute-0 sudo[270083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:12:22 compute-0 sudo[270083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
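
This sudo pair shows the cephadm orchestration pattern: locate python3 first, then re-invoke the copied cephadm binary with a timeout. The gather-facts subcommand prints host facts as a JSON document on stdout. A sketch running the same command locally, using the binary path taken verbatim from the COMMAND line above (the exact fact field names vary by cephadm release, so treat the keys below as illustrative):

    # Sketch: run the copied cephadm binary's gather-facts subcommand and
    # parse its JSON output, mirroring the sudo session logged above.
    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("kernel"))  # field names vary by release
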
Oct 10 10:12:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:22.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:22.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:23 compute-0 sudo[270083]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:12:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:12:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
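
The burst of mon_command dispatches above is the mgr's cephadm module doing its pre-deploy bookkeeping: regenerating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keyrings, persisting its osd_remove_queue and nfs spec via config-key set, and checking for destroyed OSDs. The read-only ones can be reproduced from a shell with the ceph CLI; a sketch (requires a readable admin keyring on the host):

    # Sketch: the same read-only mon commands the audit log records being
    # dispatched by mgr.compute-0.xkdepb, issued via the ceph CLI.
    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ):
        print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
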
Oct 10 10:12:23 compute-0 sudo[270142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:12:23 compute-0 sudo[270142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:23 compute-0 sudo[270142]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:23 compute-0 sudo[270167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:12:23 compute-0 sudo[270167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:23 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:23 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.045143662 +0000 UTC m=+0.059891828 container create 4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_carson, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:12:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:24 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:24 compute-0 ceph-mon[73551]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:12:24 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:12:24 compute-0 systemd[1]: Started libpod-conmon-4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057.scope.
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.019492741 +0000 UTC m=+0.034240917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:12:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.150281526 +0000 UTC m=+0.165029662 container init 4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.160287556 +0000 UTC m=+0.175035712 container start 4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_carson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.164487189 +0000 UTC m=+0.179235365 container attach 4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_carson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:12:24 compute-0 wonderful_carson[270252]: 167 167
Oct 10 10:12:24 compute-0 systemd[1]: libpod-4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057.scope: Deactivated successfully.
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.169374856 +0000 UTC m=+0.184123022 container died 4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-18ebaddd6ed2d2853fb5fc533606ce032484fea007e5bf6141f52a033444af75-merged.mount: Deactivated successfully.
Oct 10 10:12:24 compute-0 podman[270234]: 2025-10-10 10:12:24.224896033 +0000 UTC m=+0.239644169 container remove 4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:12:24 compute-0 systemd[1]: libpod-conmon-4046b3e4accfd31979372e913201b04a2277e0bf52fa2bcbaa1469fe62f72057.scope: Deactivated successfully.
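
The wonderful_carson container lives for roughly 10 ms, prints "167 167", and is removed. That matches cephadm probing the ceph user's uid/gid inside the OSD image before the ceph-volume run that follows (an inference; the container's command line is not logged, but 167:167 is the ceph:ceph ownership in upstream Ceph images). A sketch of an equivalent probe against the same image digest:

    # Sketch (assumption: the short-lived container was a uid/gid probe).
    # Stats /var/lib/ceph, owned by ceph:ceph (167:167) in upstream images.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip())   # expected: "167 167", matching the log line above
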
Oct 10 10:12:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:24 compute-0 nova_compute[261329]: 2025-10-10 10:12:24.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.437480533 +0000 UTC m=+0.047272273 container create 473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:12:24 compute-0 systemd[1]: Started libpod-conmon-473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905.scope.
Oct 10 10:12:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.416210453 +0000 UTC m=+0.026002233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6571a602b60203728419e2fea0f82a560c2d4a2fbd2aaf858196accef85dd120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6571a602b60203728419e2fea0f82a560c2d4a2fbd2aaf858196accef85dd120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6571a602b60203728419e2fea0f82a560c2d4a2fbd2aaf858196accef85dd120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6571a602b60203728419e2fea0f82a560c2d4a2fbd2aaf858196accef85dd120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6571a602b60203728419e2fea0f82a560c2d4a2fbd2aaf858196accef85dd120/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.530838641 +0000 UTC m=+0.140630401 container init 473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.538135024 +0000 UTC m=+0.147926764 container start 473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.54084849 +0000 UTC m=+0.150640230 container attach 473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_perlman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:12:24 compute-0 pedantic_perlman[270293]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:12:24 compute-0 pedantic_perlman[270293]: --> All data devices are unavailable
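
ceph-volume was handed a single LVM data device (/dev/ceph_vg0/ceph_lv0, from the lvm batch command at 10:12:23) and reports it unavailable. It does not log the reason here; the lvm list output further below shows that same LV already tagged for osd.0, so the most plausible reading is that the device is rejected because it is already consumed by an OSD (a hedged inference, not stated by ceph-volume). A sketch of that check, assuming ceph.* lv_tags are the marker of a consumed LV:

    # Sketch: treat an LV as already consumed by an OSD if it carries
    # ceph.* lv_tags, which ceph-volume itself sets at prepare time.
    import json
    import subprocess

    def lv_is_ceph_osd(lv_path: str) -> bool:
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_tags", lv_path],
            check=True, capture_output=True, text=True,
        ).stdout
        report = json.loads(out)["report"][0]["lv"]
        return any("ceph.osd_id" in lv["lv_tags"] for lv in report)

    print(lv_is_ceph_osd("/dev/ceph_vg0/ceph_lv0"))   # True for the LV above
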
Oct 10 10:12:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:24 compute-0 systemd[1]: libpod-473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905.scope: Deactivated successfully.
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.899486624 +0000 UTC m=+0.509278464 container died 473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 10:12:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6571a602b60203728419e2fea0f82a560c2d4a2fbd2aaf858196accef85dd120-merged.mount: Deactivated successfully.
Oct 10 10:12:24 compute-0 podman[270277]: 2025-10-10 10:12:24.951832558 +0000 UTC m=+0.561624298 container remove 473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_perlman, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:12:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:24 compute-0 systemd[1]: libpod-conmon-473ed4c918c7a133e30023d513dbb8e795e9f562a96277b1338d5a9aa6069905.scope: Deactivated successfully.
Oct 10 10:12:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:24.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:25 compute-0 sudo[270167]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:25 compute-0 sudo[270321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:12:25 compute-0 sudo[270321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:25 compute-0 sudo[270321]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:25 compute-0 sudo[270346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:12:25 compute-0 sudo[270346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.542473214 +0000 UTC m=+0.040276629 container create b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hypatia, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:12:25 compute-0 systemd[1]: Started libpod-conmon-b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f.scope.
Oct 10 10:12:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.616506283 +0000 UTC m=+0.114309718 container init b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.523041143 +0000 UTC m=+0.020844588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.625334466 +0000 UTC m=+0.123137881 container start b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hypatia, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.628950191 +0000 UTC m=+0.126753606 container attach b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hypatia, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 10:12:25 compute-0 great_hypatia[270425]: 167 167
Oct 10 10:12:25 compute-0 systemd[1]: libpod-b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f.scope: Deactivated successfully.
Oct 10 10:12:25 compute-0 conmon[270425]: conmon b13ea2139d205f3e5e95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f.scope/container/memory.events
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.631153182 +0000 UTC m=+0.128956597 container died b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:12:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:25 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-81a3f96a3e20447f96f516004a1e7de778aa11474325a29e3f3f8d5f033ae740-merged.mount: Deactivated successfully.
Oct 10 10:12:25 compute-0 podman[270410]: 2025-10-10 10:12:25.663827977 +0000 UTC m=+0.161631402 container remove b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hypatia, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:12:25 compute-0 systemd[1]: libpod-conmon-b13ea2139d205f3e5e955c0fb0e0df350d8c1d6db7e09ea9eac5a2250f16ff6f.scope: Deactivated successfully.
Oct 10 10:12:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:25 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facfc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:25 compute-0 podman[270450]: 2025-10-10 10:12:25.838584377 +0000 UTC m=+0.049959079 container create 13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:12:25 compute-0 systemd[1]: Started libpod-conmon-13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40.scope.
Oct 10 10:12:25 compute-0 podman[270450]: 2025-10-10 10:12:25.823298099 +0000 UTC m=+0.034672821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:12:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823c89897835f32031adcdf7bcb16a03395e8b863ffe226f36d3acc2063fe9a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823c89897835f32031adcdf7bcb16a03395e8b863ffe226f36d3acc2063fe9a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823c89897835f32031adcdf7bcb16a03395e8b863ffe226f36d3acc2063fe9a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823c89897835f32031adcdf7bcb16a03395e8b863ffe226f36d3acc2063fe9a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:25 compute-0 podman[270450]: 2025-10-10 10:12:25.944580669 +0000 UTC m=+0.155955411 container init 13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_faraday, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:12:25 compute-0 podman[270450]: 2025-10-10 10:12:25.956622774 +0000 UTC m=+0.167997506 container start 13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:12:25 compute-0 podman[270450]: 2025-10-10 10:12:25.962345747 +0000 UTC m=+0.173720459 container attach 13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:12:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:26 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:26 compute-0 ceph-mon[73551]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4121639621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:26 compute-0 nifty_faraday[270467]: {
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:     "0": [
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:         {
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "devices": [
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "/dev/loop3"
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             ],
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "lv_name": "ceph_lv0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "lv_size": "21470642176",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "name": "ceph_lv0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "tags": {
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.cluster_name": "ceph",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.crush_device_class": "",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.encrypted": "0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.osd_id": "0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.type": "block",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.vdo": "0",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:                 "ceph.with_tpm": "0"
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             },
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "type": "block",
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:             "vg_name": "ceph_vg0"
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:         }
Oct 10 10:12:26 compute-0 nifty_faraday[270467]:     ]
Oct 10 10:12:26 compute-0 nifty_faraday[270467]: }
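
The nifty_faraday output above is the stdout of the "ceph-volume ... lvm list --format json" run launched at 10:12:25, captured line by line by the journal; reassembled, it is a single JSON object keyed by OSD id. A sketch parsing it into (osd id, osd fsid, backing devices), assuming the document has been saved to a local file named lvm-list.json:

    # Sketch: parse the ceph-volume "lvm list --format json" document
    # printed above into per-OSD tuples.
    import json

    with open("lvm-list.json") as f:   # the JSON shown above, saved locally
        doc = json.load(f)
    for osd_id, lvs in doc.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, tags["ceph.osd_fsid"], lv["devices"])
    # -> 0 c307f4a4-39e7-4a9c-9d19-a2b8712089ab ['/dev/loop3']
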
Oct 10 10:12:26 compute-0 systemd[1]: libpod-13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40.scope: Deactivated successfully.
Oct 10 10:12:26 compute-0 podman[270450]: 2025-10-10 10:12:26.297916132 +0000 UTC m=+0.509290844 container died 13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_faraday, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-823c89897835f32031adcdf7bcb16a03395e8b863ffe226f36d3acc2063fe9a5-merged.mount: Deactivated successfully.
Oct 10 10:12:26 compute-0 podman[270450]: 2025-10-10 10:12:26.351379413 +0000 UTC m=+0.562754125 container remove 13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_faraday, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:12:26 compute-0 systemd[1]: libpod-conmon-13cabfa678250e5b06423028cf9fedddf07b908c6da1d4a46960743cfdde2e40.scope: Deactivated successfully.
Oct 10 10:12:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:12:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44782465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:12:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:12:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44782465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
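
client.openstack polling "df" and "osd pool get-quota" on the volumes pool is the periodic capacity check an OpenStack RBD consumer (Cinder, by the pool name) performs against the cluster; that attribution is an inference from the entity and pool names. The same two read-only queries via the ceph CLI, assuming the client.openstack keyring is readable on the host:

    # Sketch: the two mon queries recorded in the audit log, issued with
    # the same client.openstack identity.
    import json
    import subprocess

    def mon_json(*args: str) -> dict:
        out = subprocess.run(
            ["ceph", "-n", "client.openstack", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = mon_json("df")
    quota = mon_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
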
Oct 10 10:12:26 compute-0 sudo[270346]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:26 compute-0 sudo[270493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:12:26 compute-0 sudo[270493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:26 compute-0 sudo[270493]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:26 compute-0 sudo[270518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:12:26 compute-0 sudo[270518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:26.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:26.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:27.014603531 +0000 UTC m=+0.041882791 container create 6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_khayyam, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:12:27 compute-0 systemd[1]: Started libpod-conmon-6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be.scope.
Oct 10 10:12:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:27.083196756 +0000 UTC m=+0.110476016 container init 6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_khayyam, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:27.090559971 +0000 UTC m=+0.117839231 container start 6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:26.998765195 +0000 UTC m=+0.026044475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:27.093539707 +0000 UTC m=+0.120818987 container attach 6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:12:27 compute-0 sleepy_khayyam[270599]: 167 167
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:27.095261152 +0000 UTC m=+0.122540412 container died 6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:12:27 compute-0 systemd[1]: libpod-6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be.scope: Deactivated successfully.
Oct 10 10:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1625e071d72b4f968d3e90f17d763451359f4da7cc18837fcb17fee6b3d15e2f-merged.mount: Deactivated successfully.
Oct 10 10:12:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/44782465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:12:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/44782465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:12:27 compute-0 podman[270583]: 2025-10-10 10:12:27.131707048 +0000 UTC m=+0.158986308 container remove 6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 10:12:27 compute-0 systemd[1]: libpod-conmon-6882616d33fd4ca8faf653623a8c6810cf4cd5e7ae8e603416182fb3de2f14be.scope: Deactivated successfully.
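
The six podman events above (create, init, start, attach, died, remove — all within roughly 120 ms) are the footprint of a short-lived helper container, consistent with cephadm shelling out to the ceph image and tearing it down immediately. A minimal sketch for reconstructing such lifecycles from a saved journal dump; the dump filename and the token layout of podman's journal lines (as they appear above) are assumptions, not a stable podman interface:

```python
import re
from collections import defaultdict

# Matches lines shaped like the podman entries above:
#   podman[PID]: DATE TIME +0000 UTC m=+0.041... container EVENT CID (...)
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ \S+ \S+ container "
    r"(?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
)

def lifecycles(path):
    """Group lifecycle events per container ID from a journal text dump."""
    events = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                events[m.group("cid")].append((m.group("ts"), m.group("event")))
    return events

# Hypothetical dump file; produce one with: journalctl > compute-0.journal.txt
for cid, evs in lifecycles("compute-0.journal.txt").items():
    print(cid[:12], "->", ", ".join(e for _, e in sorted(evs)))
```
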
Oct 10 10:12:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:27.138Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:12:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:27.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
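
Both dashboard webhook receivers are failing here: compute-1 with a context deadline and compute-2 with a raw TCP i/o timeout, so nothing is answering on port 8443 of either peer. A quick reachability probe that mirrors the failing dial, as a sketch (the 3-second timeout is an arbitrary choice, not alertmanager's own setting):

```python
import socket

# Receiver endpoints copied from the alertmanager errors above.
RECEIVERS = [
    ("compute-1.ctlplane.example.com", 8443),
    ("compute-2.ctlplane.example.com", 8443),
]

for host, port in RECEIVERS:
    try:
        with socket.create_connection((host, port), timeout=3.0):
            print(f"{host}:{port} reachable")
    except OSError as exc:  # covers DNS failure, refusal, and timeout alike
        print(f"{host}:{port} unreachable: {exc}")
```
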
Oct 10 10:12:27 compute-0 podman[270624]: 2025-10-10 10:12:27.303447782 +0000 UTC m=+0.050297760 container create b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:12:27 compute-0 systemd[1]: Started libpod-conmon-b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14.scope.
Oct 10 10:12:27 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93eb3b75559f55cd0c61295e20b1d9b4346727518cc7efd4efbcb8a11cf91c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93eb3b75559f55cd0c61295e20b1d9b4346727518cc7efd4efbcb8a11cf91c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93eb3b75559f55cd0c61295e20b1d9b4346727518cc7efd4efbcb8a11cf91c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93eb3b75559f55cd0c61295e20b1d9b4346727518cc7efd4efbcb8a11cf91c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:27 compute-0 podman[270624]: 2025-10-10 10:12:27.278795994 +0000 UTC m=+0.025646042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:12:27 compute-0 podman[270624]: 2025-10-10 10:12:27.383225575 +0000 UTC m=+0.130075573 container init b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shockley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:12:27 compute-0 podman[270624]: 2025-10-10 10:12:27.389789504 +0000 UTC m=+0.136639502 container start b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 10:12:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:27] "GET /metrics HTTP/1.1" 200 48365 "" "Prometheus/2.51.0"
Oct 10 10:12:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:27] "GET /metrics HTTP/1.1" 200 48365 "" "Prometheus/2.51.0"
Oct 10 10:12:27 compute-0 podman[270624]: 2025-10-10 10:12:27.394749534 +0000 UTC m=+0.141599522 container attach b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 10:12:27 compute-0 nova_compute[261329]: 2025-10-10 10:12:27.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:27 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:27 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:28 compute-0 lvm[270716]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:12:28 compute-0 lvm[270716]: VG ceph_vg0 finished
Oct 10 10:12:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:28 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:28 compute-0 dreamy_shockley[270641]: {}
Oct 10 10:12:28 compute-0 ceph-mon[73551]: pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:28 compute-0 systemd[1]: libpod-b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14.scope: Deactivated successfully.
Oct 10 10:12:28 compute-0 podman[270624]: 2025-10-10 10:12:28.138345023 +0000 UTC m=+0.885194991 container died b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 10:12:28 compute-0 systemd[1]: libpod-b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14.scope: Consumed 1.277s CPU time.
Oct 10 10:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93eb3b75559f55cd0c61295e20b1d9b4346727518cc7efd4efbcb8a11cf91c5-merged.mount: Deactivated successfully.
Oct 10 10:12:28 compute-0 podman[270624]: 2025-10-10 10:12:28.187736383 +0000 UTC m=+0.934586341 container remove b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 10:12:28 compute-0 systemd[1]: libpod-conmon-b35c0ac9473fd7e059d42126b1b12297f83c8f3ed64a8763c2c00f0a9dcd1c14.scope: Deactivated successfully.
Oct 10 10:12:28 compute-0 sudo[270518]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:12:28 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:12:28 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:28 compute-0 sudo[270735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:12:28 compute-0 sudo[270735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:28 compute-0 sudo[270735]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:28.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
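
The anonymous HEAD / pair from 192.168.122.100 and .102 repeats every two seconds for the rest of this capture; that cadence, plus the 200-with-empty-body responses, is characteristic of load-balancer health checks against the radosgw beast frontend. The same probe by hand, as a sketch; the endpoint host and port are assumptions, since the log records only the client addresses, not the listening socket:

```python
import http.client

# Hypothetical RGW endpoint; substitute the real frontend host and port.
conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)  # a healthy RGW answers 200, as in the log
conn.close()
```
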
Oct 10 10:12:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:29 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:29 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:29 compute-0 nova_compute[261329]: 2025-10-10 10:12:29.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:29 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:29 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:30 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:30 compute-0 ceph-mon[73551]: pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:30.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:30.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:31 compute-0 ceph-mon[73551]: pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:12:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
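
The mgr (cephadm's serve loop) polls `osd blocklist ls` on a roughly 15-second cycle; the same dispatch recurs at 10:12:46 below. The equivalent query from a shell-capable host, sketched via subprocess; it assumes a local ceph CLI and a keyring with mon read capability:

```python
import json
import subprocess

# Same mon command the mgr dispatches above, in JSON form.
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
entries = json.loads(out) if out.strip() else []
print(f"{len(entries)} blocklist entries")
```
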
Oct 10 10:12:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:31 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:31 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:32 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:32 compute-0 nova_compute[261329]: 2025-10-10 10:12:32.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:32.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:32.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:33 compute-0 ceph-mon[73551]: pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:33 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:33 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:34 compute-0 nova_compute[261329]: 2025-10-10 10:12:34.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:34 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3163061778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:12:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:34.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:34.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:35 compute-0 podman[270767]: 2025-10-10 10:12:35.222504152 +0000 UTC m=+0.063388240 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
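
The health_status event above carries the entire edpm_ansible-rendered container definition in its config_data label, serialized as a Python-literal dict (single quotes, bare True). That makes it recoverable with ast.literal_eval once extracted; a sketch on a trimmed excerpt, assuming the quoting survives the journal verbatim:

```python
import ast

# Trimmed excerpt of the config_data label above (hypothetical extraction).
excerpt = "{'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always'}"
cfg = ast.literal_eval(excerpt)
print(cfg["restart"])  # -> always
```
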
Oct 10 10:12:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101235 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:12:35 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/543711216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:12:35 compute-0 ceph-mon[73551]: pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:35 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:35 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:36 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:12:36.014 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:12:36 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:12:36.015 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:12:36 compute-0 nova_compute[261329]: 2025-10-10 10:12:36.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:36 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:36.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:37.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:12:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:37] "GET /metrics HTTP/1.1" 200 48390 "" "Prometheus/2.51.0"
Oct 10 10:12:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:37] "GET /metrics HTTP/1.1" 200 48390 "" "Prometheus/2.51.0"
Oct 10 10:12:37 compute-0 nova_compute[261329]: 2025-10-10 10:12:37.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:37 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:37 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:37 compute-0 sudo[270788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:12:37 compute-0 sudo[270788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:37 compute-0 sudo[270788]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:37 compute-0 ceph-mon[73551]: pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:38 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 710 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 10 10:12:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:39 compute-0 nova_compute[261329]: 2025-10-10 10:12:39.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:39 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:39 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:39 compute-0 ceph-mon[73551]: pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 710 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 10 10:12:40 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:40 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 693 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 10 10:12:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:41 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:41 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:41 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:12:41.899 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:12:41.900 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:12:41.900 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:41 compute-0 ceph-mon[73551]: pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 693 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 10 10:12:42 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:42 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:42 compute-0 nova_compute[261329]: 2025-10-10 10:12:42.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:42.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:43 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:43 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:44 compute-0 ceph-mon[73551]: pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:44 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:12:44.016 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:44 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:12:44 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:44 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:44 compute-0 nova_compute[261329]: 2025-10-10 10:12:44.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:44 compute-0 ovn_controller[153080]: 2025-10-10T10:12:44Z|00036|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 10 10:12:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:44.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 10:12:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:44.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 10:12:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:45 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:45 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:45 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:46 compute-0 ceph-mon[73551]: pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:46 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:12:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:12:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:47.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:47] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:12:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:47] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:12:47 compute-0 nova_compute[261329]: 2025-10-10 10:12:47.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:47 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:48 compute-0 ceph-mon[73551]: pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:48 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:48 compute-0 nova_compute[261329]: 2025-10-10 10:12:48.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 10 10:12:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:49 compute-0 nova_compute[261329]: 2025-10-10 10:12:49.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:49 compute-0 nova_compute[261329]: 2025-10-10 10:12:49.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:12:49 compute-0 nova_compute[261329]: 2025-10-10 10:12:49.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:12:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:49 compute-0 nova_compute[261329]: 2025-10-10 10:12:49.266 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:12:49 compute-0 nova_compute[261329]: 2025-10-10 10:12:49.266 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:49 compute-0 nova_compute[261329]: 2025-10-10 10:12:49.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:49 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:49 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:49 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:50 compute-0 ceph-mon[73551]: pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 10 10:12:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:50 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
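
Grace began at 10:12:44 with a 90-second budget but was lifted at 10:12:50: the reaper's "clid count(0)" shows no clients held reclaimable state, so ganesha ended the grace period early rather than waiting out the full window. The arithmetic, for the record:

```python
from datetime import datetime

# Timestamps copied from the two grace events above (day and month are
# both 10 here, so the dd/mm vs mm/dd ambiguity is moot).
start = datetime.strptime("10/10/2025 10:12:44", "%d/%m/%Y %H:%M:%S")
lift = datetime.strptime("10/10/2025 10:12:50", "%d/%m/%Y %H:%M:%S")
print((lift - start).total_seconds())  # 6.0 s of an allotted 90 s
```
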
Oct 10 10:12:50 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:50 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:50 compute-0 nova_compute[261329]: 2025-10-10 10:12:50.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 937 B/s wr, 44 op/s
Oct 10 10:12:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:50.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:51 compute-0 nova_compute[261329]: 2025-10-10 10:12:51.232 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:51 compute-0 nova_compute[261329]: 2025-10-10 10:12:51.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:51 compute-0 nova_compute[261329]: 2025-10-10 10:12:51.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:12:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:51 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:51 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:51 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:52 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:52 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:52 compute-0 ceph-mon[73551]: pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 937 B/s wr, 44 op/s
Oct 10 10:12:52 compute-0 podman[270830]: 2025-10-10 10:12:52.232189144 +0000 UTC m=+0.079903900 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:12:52 compute-0 podman[270831]: 2025-10-10 10:12:52.232303428 +0000 UTC m=+0.077449262 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3)
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.261 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.261 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.261 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.262 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.262 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:52 compute-0 podman[270832]: 2025-10-10 10:12:52.265252519 +0000 UTC m=+0.103935116 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:12:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175062627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.688 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.875 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.876 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4591MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.876 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.877 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.931 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.931 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:12:52 compute-0 nova_compute[261329]: 2025-10-10 10:12:52.947 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:52.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1175062627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:12:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881918671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:53 compute-0 nova_compute[261329]: 2025-10-10 10:12:53.389 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:53 compute-0 nova_compute[261329]: 2025-10-10 10:12:53.397 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:12:53 compute-0 nova_compute[261329]: 2025-10-10 10:12:53.412 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:12:53 compute-0 nova_compute[261329]: 2025-10-10 10:12:53.434 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:12:53 compute-0 nova_compute[261329]: 2025-10-10 10:12:53.434 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:53 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:53 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:53 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:54 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:54 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:54 compute-0 ceph-mon[73551]: pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 10 10:12:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4031376606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/881918671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1943454956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:54 compute-0 nova_compute[261329]: 2025-10-10 10:12:54.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:54 compute-0 nova_compute[261329]: 2025-10-10 10:12:54.434 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:12:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:12:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:54.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:12:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1031806664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101255 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:12:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:55 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:55 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:55 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:56 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:56 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:56 compute-0 ceph-mon[73551]: pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:12:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2515670112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:12:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:57.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:57.143Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:12:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:12:57.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:12:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:57] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:12:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:12:57] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:12:57 compute-0 nova_compute[261329]: 2025-10-10 10:12:57.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:57 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:57 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:58 compute-0 sudo[270943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:12:58 compute-0 sudo[270943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:58 compute-0 sudo[270943]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:58 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:58 compute-0 ceph-mon[73551]: pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:12:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 10 10:12:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:12:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:12:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:12:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:59 compute-0 nova_compute[261329]: 2025-10-10 10:12:59.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:59 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:12:59 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:00 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:00 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:00 compute-0 ceph-mon[73551]: pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 10 10:13:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:13:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:00.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:13:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:01 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:01 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:01 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80041d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:02 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:02 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:02 compute-0 ceph-mon[73551]: pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:13:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:02 compute-0 nova_compute[261329]: 2025-10-10 10:13:02.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:13:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:02.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:03.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:03 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:03 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:03 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:04 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:04 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80041d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:04 compute-0 ceph-mon[73551]: pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:13:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:04 compute-0 nova_compute[261329]: 2025-10-10 10:13:04.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 14 KiB/s wr, 1 op/s
Oct 10 10:13:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:04.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:05 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:05 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:05 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:06 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:06 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:06 compute-0 podman[270979]: 2025-10-10 10:13:06.216370535 +0000 UTC m=+0.064750676 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:13:06 compute-0 ceph-mon[73551]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 14 KiB/s wr, 1 op/s
Oct 10 10:13:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Oct 10 10:13:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:07.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:13:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:07] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 10 10:13:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:07] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 10 10:13:07 compute-0 nova_compute[261329]: 2025-10-10 10:13:07.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:07 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80041d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:07 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:08 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:08 compute-0 ceph-mon[73551]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Oct 10 10:13:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:13:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:09 compute-0 nova_compute[261329]: 2025-10-10 10:13:09.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:09 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:09 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:09 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:10 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:10 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:10 compute-0 ceph-mon[73551]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:13:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 10 10:13:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:10.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101311 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:13:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:11 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:11 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:12 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:12 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:12 compute-0 ceph-mon[73551]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 10 10:13:12 compute-0 nova_compute[261329]: 2025-10-10 10:13:12.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 10 10:13:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:13 compute-0 ceph-mon[73551]: pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 10 10:13:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:13 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:13 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:13 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:14 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:14 compute-0 nova_compute[261329]: 2025-10-10 10:13:14.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1023 B/s wr, 1 op/s
Oct 10 10:13:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:15 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:15 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:15 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:15 compute-0 ceph-mon[73551]: pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1023 B/s wr, 1 op/s
Oct 10 10:13:16 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:16 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:13:16
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.mgr', '.nfs', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.control', 'images', 'vms', '.rgw.root', 'default.rgw.meta']
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:13:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:13:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:13:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1023 B/s wr, 1 op/s
Oct 10 10:13:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:17.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:13:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:13:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:13:17 compute-0 nova_compute[261329]: 2025-10-10 10:13:17.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:17 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:17 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:18 compute-0 ceph-mon[73551]: pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1023 B/s wr, 1 op/s
Oct 10 10:13:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2423340283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:18 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:18 compute-0 sudo[271010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:13:18 compute-0 sudo[271010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:18 compute-0 sudo[271010]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:13:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5786 writes, 25K keys, 5786 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 5786 writes, 5786 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1570 writes, 6644 keys, 1570 commit groups, 1.0 writes per commit group, ingest: 11.25 MB, 0.02 MB/s
                                           Interval WAL: 1570 writes, 1570 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    124.6      0.32              0.13        14    0.023       0      0       0.0       0.0
                                             L6      1/0   11.88 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    160.5    137.4      1.19              0.49        13    0.091     67K   6892       0.0       0.0
                                            Sum      1/0   11.88 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.1    126.8    134.7      1.50              0.62        27    0.056     67K   6892       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6    149.4    147.0      0.49              0.21        10    0.049     29K   2556       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    160.5    137.4      1.19              0.49        13    0.091     67K   6892       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.9      0.31              0.13        13    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b2d7d9350#2 capacity: 304.00 MB usage: 14.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000192 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(814,14.30 MB,4.70478%) FilterBlock(28,201.05 KB,0.0645838%) IndexBlock(28,348.70 KB,0.112017%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
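
The DB Stats dump above gives enough to estimate the monitor store's effective write amplification: roughly 0.04 GB of user ingest turned into 0.04 GB of WAL writes, 0.038 GB of flushes, and 0.20 GB of compaction writes. Back-of-the-envelope arithmetic on those (rounded) figures:

    # Figures copied from the "DB Stats" / compaction summary above.
    ingest_gb = 0.04            # Cumulative writes ... ingest: 0.04 GB
    wal_gb = 0.04               # Cumulative WAL ... written: 0.04 GB
    flush_gb = 0.038            # Flush(GB): cumulative 0.038
    compaction_write_gb = 0.20  # Cumulative compaction: 0.20 GB write

    total_written = wal_gb + flush_gb + compaction_write_gb
    print(f"write amplification ~ {total_written / ingest_gb:.1f}x")  # ~7x
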
Oct 10 10:13:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.9 KiB/s wr, 29 op/s
Oct 10 10:13:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:18.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
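
_set_new_cache_sizes repartitions the mon's ~0.95 GiB cache budget into incremental-map, full-map, and key-value allocations; the three numbers nearly tile the reported cache_size. Checking the arithmetic from this line:

    inc, full, kv = 343932928, 348127232, 318767104
    cache_size = 1020054731
    print(inc + full + kv)                 # 1010827264
    print(cache_size - (inc + full + kv))  # 9227467 bytes (~8.8 MiB) unassigned
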
Oct 10 10:13:19 compute-0 nova_compute[261329]: 2025-10-10 10:13:19.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:19 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:19 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:19 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:20 compute-0 ceph-mon[73551]: pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.9 KiB/s wr, 29 op/s
Oct 10 10:13:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:20 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:13:20 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:20 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.9 KiB/s wr, 29 op/s
Oct 10 10:13:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:20.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:21.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:21 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:21 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:21 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:22 compute-0 ceph-mon[73551]: pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.9 KiB/s wr, 29 op/s
Oct 10 10:13:22 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:22 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:22 compute-0 nova_compute[261329]: 2025-10-10 10:13:22.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 30 op/s
Oct 10 10:13:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:23 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:13:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:23 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
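
The ganesha reaper entered a 90-second grace period at 10:13:20, reloaded client reclaim info from the backend, and here finds "reclaim complete(0) clid count(0)"; with no clients holding reclaimable state, grace is lifted well before the deadline (the NOT IN GRACE line follows at 10:13:26). An illustrative version of that early-lift check, not ganesha's actual code:

    def can_lift_grace(clid_count: int, reclaim_complete: int) -> bool:
        # With no clients tracked, or all of them done reclaiming,
        # there is nothing left to protect and grace can end early.
        return clid_count == 0 or reclaim_complete >= clid_count

    print(can_lift_grace(clid_count=0, reclaim_complete=0))  # True -> lift
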
Oct 10 10:13:23 compute-0 podman[271042]: 2025-10-10 10:13:23.244180478 +0000 UTC m=+0.086822790 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:13:23 compute-0 podman[271041]: 2025-10-10 10:13:23.246734099 +0000 UTC m=+0.086277102 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:13:23 compute-0 podman[271043]: 2025-10-10 10:13:23.258255797 +0000 UTC m=+0.101587330 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:13:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:23 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:23 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:23 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:24 compute-0 ceph-mon[73551]: pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 30 op/s
Oct 10 10:13:24 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:24 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:24 compute-0 nova_compute[261329]: 2025-10-10 10:13:24.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Oct 10 10:13:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:25 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:25 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:25 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:26 compute-0 ceph-mon[73551]: pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Oct 10 10:13:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:26 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:13:26 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:26 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:13:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3183478379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:13:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:13:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3183478379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:13:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Oct 10 10:13:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:27.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3183478379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:13:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3183478379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:13:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:27.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:13:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:13:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:13:27 compute-0 nova_compute[261329]: 2025-10-10 10:13:27.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:27 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:27 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:28 compute-0 ceph-mon[73551]: pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Oct 10 10:13:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:28 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:28 compute-0 sudo[271104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:13:28 compute-0 sudo[271104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:28 compute-0 sudo[271104]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:28 compute-0 sudo[271129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:13:28 compute-0 sudo[271129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.8 KiB/s wr, 31 op/s
Oct 10 10:13:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:28.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:29 compute-0 sudo[271129]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:29 compute-0 nova_compute[261329]: 2025-10-10 10:13:29.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:13:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:13:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:13:29 compute-0 sudo[271185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:13:29 compute-0 sudo[271185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:29 compute-0 sudo[271185]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:29 compute-0 sudo[271210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:13:29 compute-0 sudo[271210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:29 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:29 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:29 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:30.015485891 +0000 UTC m=+0.051925648 container create 544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shirley, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:13:30 compute-0 systemd[1]: Started libpod-conmon-544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58.scope.
Oct 10 10:13:30 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:29.993139588 +0000 UTC m=+0.029579395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:30.093653454 +0000 UTC m=+0.130093231 container init 544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:30.100355387 +0000 UTC m=+0.136795154 container start 544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:30.104635354 +0000 UTC m=+0.141075111 container attach 544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shirley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 10:13:30 compute-0 systemd[1]: libpod-544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58.scope: Deactivated successfully.
Oct 10 10:13:30 compute-0 dreamy_shirley[271293]: 167 167
Oct 10 10:13:30 compute-0 conmon[271293]: conmon 544b180b7c0445cf67da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58.scope/container/memory.events
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:30.10731513 +0000 UTC m=+0.143754897 container died 544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcfb3b2c78f7aabaf36ab296d59e097ad2868ab392840e5bd8b2d75e6b047eed-merged.mount: Deactivated successfully.
Oct 10 10:13:30 compute-0 ceph-mon[73551]: pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.8 KiB/s wr, 31 op/s
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:13:30 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:13:30 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:30 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:30 compute-0 podman[271277]: 2025-10-10 10:13:30.148911066 +0000 UTC m=+0.185350823 container remove 544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shirley, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 10:13:30 compute-0 systemd[1]: libpod-conmon-544b180b7c0445cf67da789ff9a141206359358a5753d71b12f179ba37cc6e58.scope: Deactivated successfully.
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.361421264 +0000 UTC m=+0.060909024 container create 80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 10:13:30 compute-0 systemd[1]: Started libpod-conmon-80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6.scope.
Oct 10 10:13:30 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a602b49d8096feb5f40e31c21ee26a4092bd1c9aaf2f9acf9f143bd27075983/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a602b49d8096feb5f40e31c21ee26a4092bd1c9aaf2f9acf9f143bd27075983/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a602b49d8096feb5f40e31c21ee26a4092bd1c9aaf2f9acf9f143bd27075983/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.341467047 +0000 UTC m=+0.040954807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a602b49d8096feb5f40e31c21ee26a4092bd1c9aaf2f9acf9f143bd27075983/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a602b49d8096feb5f40e31c21ee26a4092bd1c9aaf2f9acf9f143bd27075983/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.447791888 +0000 UTC m=+0.147279628 container init 80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.458775228 +0000 UTC m=+0.158262968 container start 80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.462244469 +0000 UTC m=+0.161732219 container attach 80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:13:30 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:13:30 compute-0 sad_lumiere[271335]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:13:30 compute-0 sad_lumiere[271335]: --> All data devices are unavailable
Oct 10 10:13:30 compute-0 systemd[1]: libpod-80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6.scope: Deactivated successfully.
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.814906057 +0000 UTC m=+0.514393797 container died 80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a602b49d8096feb5f40e31c21ee26a4092bd1c9aaf2f9acf9f143bd27075983-merged.mount: Deactivated successfully.
Oct 10 10:13:30 compute-0 podman[271318]: 2025-10-10 10:13:30.855626926 +0000 UTC m=+0.555114676 container remove 80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:13:30 compute-0 systemd[1]: libpod-conmon-80b4e8d3758ed5033ecc3d85d1c03bf98439e435167ab2c5bfc5addc542d30a6.scope: Deactivated successfully.
Oct 10 10:13:30 compute-0 sudo[271210]: pam_unix(sudo:session): session closed for user root
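
The sad_lumiere container above is the "ceph-volume ... lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" run launched via sudo at 10:13:29; it reports its one LVM data device as unavailable (typically because the LV already carries an OSD), so the batch is a no-op and the sudo session closes here. A sketch of checking device availability out-of-band with ceph-volume's inventory command; the JSON field names ("available", "rejected_reasons") are the commonly emitted ones and should be treated as an assumption:

    import json
    import subprocess

    # Runs on the host; in this deployment cephadm runs the same tool inside
    # a container, but the inventory report is equivalent.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev.get("path"), "->",
                  ", ".join(dev.get("rejected_reasons", [])))
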
Oct 10 10:13:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:13:30 compute-0 sudo[271364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:13:30 compute-0 sudo[271364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:30 compute-0 sudo[271364]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:31 compute-0 sudo[271389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:13:31 compute-0 sudo[271389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101331 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:13:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:13:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.456247952 +0000 UTC m=+0.038059144 container create 9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 10:13:31 compute-0 systemd[1]: Started libpod-conmon-9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1.scope.
Oct 10 10:13:31 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.440385057 +0000 UTC m=+0.022196299 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.53863435 +0000 UTC m=+0.120445632 container init 9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_noyce, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.54553629 +0000 UTC m=+0.127347492 container start 9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_noyce, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.549077113 +0000 UTC m=+0.130888405 container attach 9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_noyce, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:13:31 compute-0 elated_noyce[271472]: 167 167
Oct 10 10:13:31 compute-0 systemd[1]: libpod-9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1.scope: Deactivated successfully.
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.55339238 +0000 UTC m=+0.135203622 container died 9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_noyce, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-21a969497386210b01c930cbae46fc0a9ef6e86577d8989eff97c6876f8af9ab-merged.mount: Deactivated successfully.
Oct 10 10:13:31 compute-0 podman[271455]: 2025-10-10 10:13:31.601645419 +0000 UTC m=+0.183456611 container remove 9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct 10 10:13:31 compute-0 systemd[1]: libpod-conmon-9a487b36da0bb2fc7fe3f5766a87a1a25f0d2815d01b5d5408dffc12c349eeb1.scope: Deactivated successfully.
Oct 10 10:13:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:31 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:31 compute-0 podman[271497]: 2025-10-10 10:13:31.795604776 +0000 UTC m=+0.046324879 container create fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 10:13:31 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:31 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:31 compute-0 systemd[1]: Started libpod-conmon-fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff.scope.
Oct 10 10:13:31 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b287acac6f169dfecd55066559bce1808da7e1607fa0323ee9e4902b95f0e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b287acac6f169dfecd55066559bce1808da7e1607fa0323ee9e4902b95f0e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b287acac6f169dfecd55066559bce1808da7e1607fa0323ee9e4902b95f0e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b287acac6f169dfecd55066559bce1808da7e1607fa0323ee9e4902b95f0e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:31 compute-0 podman[271497]: 2025-10-10 10:13:31.775496544 +0000 UTC m=+0.026216677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:13:31 compute-0 podman[271497]: 2025-10-10 10:13:31.87695727 +0000 UTC m=+0.127677443 container init fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:13:31 compute-0 podman[271497]: 2025-10-10 10:13:31.889706627 +0000 UTC m=+0.140426720 container start fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_johnson, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:13:31 compute-0 podman[271497]: 2025-10-10 10:13:31.894789399 +0000 UTC m=+0.145509512 container attach fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:13:32 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:32 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:32 compute-0 ceph-mon[73551]: pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:13:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:32 compute-0 sharp_johnson[271513]: {
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:     "0": [
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:         {
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "devices": [
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "/dev/loop3"
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             ],
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "lv_name": "ceph_lv0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "lv_size": "21470642176",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "name": "ceph_lv0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "tags": {
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.cluster_name": "ceph",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.crush_device_class": "",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.encrypted": "0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.osd_id": "0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.type": "block",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.vdo": "0",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:                 "ceph.with_tpm": "0"
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             },
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "type": "block",
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:             "vg_name": "ceph_vg0"
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:         }
Oct 10 10:13:32 compute-0 sharp_johnson[271513]:     ]
Oct 10 10:13:32 compute-0 sharp_johnson[271513]: }
Oct 10 10:13:32 compute-0 systemd[1]: libpod-fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff.scope: Deactivated successfully.
Oct 10 10:13:32 compute-0 podman[271497]: 2025-10-10 10:13:32.195782839 +0000 UTC m=+0.446502952 container died fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-01b287acac6f169dfecd55066559bce1808da7e1607fa0323ee9e4902b95f0e1-merged.mount: Deactivated successfully.
Oct 10 10:13:32 compute-0 podman[271497]: 2025-10-10 10:13:32.249157601 +0000 UTC m=+0.499877704 container remove fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 10:13:32 compute-0 systemd[1]: libpod-conmon-fbfe3ac1c26fffa1fb3348baa2507c0a4a5ab1157252729d0dadb1b6e439daff.scope: Deactivated successfully.
Oct 10 10:13:32 compute-0 sudo[271389]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:32 compute-0 sudo[271536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:13:32 compute-0 sudo[271536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:32 compute-0 sudo[271536]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:32 compute-0 sudo[271561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:13:32 compute-0 sudo[271561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:32 compute-0 nova_compute[261329]: 2025-10-10 10:13:32.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.842207486 +0000 UTC m=+0.040944207 container create de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 10:13:32 compute-0 systemd[1]: Started libpod-conmon-de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d.scope.
Oct 10 10:13:32 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.91758471 +0000 UTC m=+0.116321441 container init de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.826238807 +0000 UTC m=+0.024975518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.927201457 +0000 UTC m=+0.125938178 container start de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:13:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.931852975 +0000 UTC m=+0.130589686 container attach de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct 10 10:13:32 compute-0 musing_boyd[271643]: 167 167
Oct 10 10:13:32 compute-0 systemd[1]: libpod-de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d.scope: Deactivated successfully.
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.933993663 +0000 UTC m=+0.132730384 container died de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f72fbf74eae08ca1fec30a0f8672ba577d6244cd294c20ff6019f90aedece51-merged.mount: Deactivated successfully.
Oct 10 10:13:32 compute-0 podman[271626]: 2025-10-10 10:13:32.974294689 +0000 UTC m=+0.173031400 container remove de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_boyd, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:13:32 compute-0 systemd[1]: libpod-conmon-de08f13881ea97f2dbbdaec14f0f31946099a428e13404d6f8f9b6958eb3e30d.scope: Deactivated successfully.
Oct 10 10:13:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:32.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:33.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:33 compute-0 podman[271667]: 2025-10-10 10:13:33.206248767 +0000 UTC m=+0.058257349 container create 0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_darwin, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:13:33 compute-0 systemd[1]: Started libpod-conmon-0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82.scope.
Oct 10 10:13:33 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514ff1359f2fc9cd1ff25925a64c0e62e4347668669fddfa4487985ae6abcb2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514ff1359f2fc9cd1ff25925a64c0e62e4347668669fddfa4487985ae6abcb2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514ff1359f2fc9cd1ff25925a64c0e62e4347668669fddfa4487985ae6abcb2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514ff1359f2fc9cd1ff25925a64c0e62e4347668669fddfa4487985ae6abcb2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:33 compute-0 podman[271667]: 2025-10-10 10:13:33.189450971 +0000 UTC m=+0.041459573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:13:33 compute-0 podman[271667]: 2025-10-10 10:13:33.295419131 +0000 UTC m=+0.147427753 container init 0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_darwin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:13:33 compute-0 podman[271667]: 2025-10-10 10:13:33.306093391 +0000 UTC m=+0.158101983 container start 0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_darwin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:13:33 compute-0 podman[271667]: 2025-10-10 10:13:33.30981928 +0000 UTC m=+0.161827882 container attach 0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_darwin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:13:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:33 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:33 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:33 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:33 compute-0 lvm[271757]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:13:33 compute-0 lvm[271757]: VG ceph_vg0 finished
Oct 10 10:13:33 compute-0 nifty_darwin[271683]: {}
Oct 10 10:13:34 compute-0 systemd[1]: libpod-0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82.scope: Deactivated successfully.
Oct 10 10:13:34 compute-0 systemd[1]: libpod-0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82.scope: Consumed 1.091s CPU time.
Oct 10 10:13:34 compute-0 podman[271667]: 2025-10-10 10:13:34.010444586 +0000 UTC m=+0.862453178 container died 0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-514ff1359f2fc9cd1ff25925a64c0e62e4347668669fddfa4487985ae6abcb2f-merged.mount: Deactivated successfully.
Oct 10 10:13:34 compute-0 podman[271667]: 2025-10-10 10:13:34.058906481 +0000 UTC m=+0.910915073 container remove 0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:13:34 compute-0 systemd[1]: libpod-conmon-0c704443366c3cbe6c95e03dbdfb137e94cd6f32a3d0d154983e57e727d5fb82.scope: Deactivated successfully.
Oct 10 10:13:34 compute-0 sudo[271561]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:13:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:13:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:34 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:34 compute-0 ceph-mon[73551]: pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:13:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:34 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:34 compute-0 sudo[271773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:13:34 compute-0 sudo[271773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:34 compute-0 sudo[271773]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:34 compute-0 nova_compute[261329]: 2025-10-10 10:13:34.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:35.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:35 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:35 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:35 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:36 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:36 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facdc0040b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:36 compute-0 ceph-mon[73551]: pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:37.147Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:13:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:37.147Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:13:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:37.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:13:37 compute-0 podman[271801]: 2025-10-10 10:13:37.251678871 +0000 UTC m=+0.088545235 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:13:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:37] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Oct 10 10:13:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:37] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Oct 10 10:13:37 compute-0 nova_compute[261329]: 2025-10-10 10:13:37.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo[269442]: 10/10/2025 10:13:37 : epoch 68e8dbd6 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facd8004450 fd 38 proxy ignored for local
Oct 10 10:13:37 compute-0 kernel: ganesha.nfsd[271491]: segfault at 50 ip 00007fadb669a32e sp 00007fad6effc210 error 4 in libntirpc.so.5.8[7fadb667f000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 10:13:37 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:13:37 compute-0 systemd[1]: Started Process Core Dump (PID 271820/UID 0).
Oct 10 10:13:37 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:13:37.856 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:13:37 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:13:37.857 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:13:37 compute-0 nova_compute[261329]: 2025-10-10 10:13:37.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:38 compute-0 ceph-mon[73551]: pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:38 compute-0 sudo[271823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:13:38 compute-0 sudo[271823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:38 compute-0 sudo[271823]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:38 compute-0 systemd-coredump[271821]: Process 269449 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007fadb669a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:13:38 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:13:38.859 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:38 compute-0 systemd[1]: systemd-coredump@11-271820-0.service: Deactivated successfully.
Oct 10 10:13:38 compute-0 systemd[1]: systemd-coredump@11-271820-0.service: Consumed 1.089s CPU time.
Oct 10 10:13:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:38 compute-0 podman[271853]: 2025-10-10 10:13:38.936334251 +0000 UTC m=+0.027088144 container died a34e5698492f6c796f8e4b7a1a1d6e29a3ba76acdaa77cfab961b8c8b00a25b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-93478cea2b1c03d94252e28789b4c615eda6c170f3f7aed45cfa2d985be11b79-merged.mount: Deactivated successfully.
Oct 10 10:13:38 compute-0 podman[271853]: 2025-10-10 10:13:38.970181161 +0000 UTC m=+0.060935034 container remove a34e5698492f6c796f8e4b7a1a1d6e29a3ba76acdaa77cfab961b8c8b00a25b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-2-0-compute-0-ruydzo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:13:38 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:13:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:39.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:39 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:13:39 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.585s CPU time.
Oct 10 10:13:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:39 compute-0 nova_compute[261329]: 2025-10-10 10:13:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:40 compute-0 ceph-mon[73551]: pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:13:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:41.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:41.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:13:41.901 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:13:41.902 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:13:41.902 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:42 compute-0 ceph-mon[73551]: pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:13:42 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1897484170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:42 compute-0 nova_compute[261329]: 2025-10-10 10:13:42.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:43.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:43.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:43 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101343 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:13:44 compute-0 ceph-mon[73551]: pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:44 compute-0 nova_compute[261329]: 2025-10-10 10:13:44.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:45.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:45 compute-0 ceph-mon[73551]: pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:13:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:13:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:47.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:13:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1301345308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:13:47 compute-0 ceph-mon[73551]: pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2845144114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:13:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:47] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:13:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:47] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:13:47 compute-0 nova_compute[261329]: 2025-10-10 10:13:47.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:13:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:49.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:49 compute-0 nova_compute[261329]: 2025-10-10 10:13:49.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:49 compute-0 nova_compute[261329]: 2025-10-10 10:13:49.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:13:49 compute-0 nova_compute[261329]: 2025-10-10 10:13:49.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:13:49 compute-0 nova_compute[261329]: 2025-10-10 10:13:49.266 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:13:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:49 compute-0 nova_compute[261329]: 2025-10-10 10:13:49.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:49 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Scheduled restart job, restart counter is at 12.
Oct 10 10:13:49 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:13:49 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Consumed 1.585s CPU time.
Oct 10 10:13:49 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Start request repeated too quickly.
Oct 10 10:13:49 compute-0 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.2.0.compute-0.ruydzo.service: Failed with result 'exit-code'.
Oct 10 10:13:49 compute-0 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.ruydzo for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:13:50 compute-0 ceph-mon[73551]: pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:13:50 compute-0 nova_compute[261329]: 2025-10-10 10:13:50.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:50 compute-0 nova_compute[261329]: 2025-10-10 10:13:50.271 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:13:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:51.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:13:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:51.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:13:51 compute-0 nova_compute[261329]: 2025-10-10 10:13:51.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:51 compute-0 nova_compute[261329]: 2025-10-10 10:13:51.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:51 compute-0 nova_compute[261329]: 2025-10-10 10:13:51.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:52 compute-0 ceph-mon[73551]: pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:13:52 compute-0 nova_compute[261329]: 2025-10-10 10:13:52.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:52 compute-0 nova_compute[261329]: 2025-10-10 10:13:52.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 10 10:13:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:53.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:53.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.239 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.263 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.263 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.264 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.264 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.265 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:13:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533079177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.725 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.875 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.877 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4655MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.877 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.877 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.941 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.941 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:13:53 compute-0 nova_compute[261329]: 2025-10-10 10:13:53.959 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:54 compute-0 ceph-mon[73551]: pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 10 10:13:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2920225613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1533079177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3365250461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-0 podman[271955]: 2025-10-10 10:13:54.218478557 +0000 UTC m=+0.064627363 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 10 10:13:54 compute-0 podman[271956]: 2025-10-10 10:13:54.225251464 +0000 UTC m=+0.062597329 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 10 10:13:54 compute-0 podman[271957]: 2025-10-10 10:13:54.247475393 +0000 UTC m=+0.086335365 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 10 10:13:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:54 compute-0 nova_compute[261329]: 2025-10-10 10:13:54.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:13:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769663104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-0 nova_compute[261329]: 2025-10-10 10:13:54.395 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:54 compute-0 nova_compute[261329]: 2025-10-10 10:13:54.400 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:13:54 compute-0 nova_compute[261329]: 2025-10-10 10:13:54.415 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:13:54 compute-0 nova_compute[261329]: 2025-10-10 10:13:54.417 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:13:54 compute-0 nova_compute[261329]: 2025-10-10 10:13:54.417 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:55.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1769663104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:55.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:56 compute-0 ceph-mon[73551]: pgmap v839: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1438646568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:57.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2710916285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:13:57.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:13:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:57] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:13:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:13:57] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:13:57 compute-0 nova_compute[261329]: 2025-10-10 10:13:57.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:58 compute-0 ceph-mon[73551]: pgmap v840: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:58 compute-0 sudo[272027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:13:58 compute-0 sudo[272027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:58 compute-0 sudo[272027]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:59.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:13:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:13:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:13:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:59 compute-0 nova_compute[261329]: 2025-10-10 10:13:59.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 10 10:14:00 compute-0 ceph-mon[73551]: pgmap v841: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:14:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 10 10:14:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:14:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:02 compute-0 ceph-mon[73551]: pgmap v842: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 10 10:14:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:02 compute-0 nova_compute[261329]: 2025-10-10 10:14:02.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 10 10:14:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:03.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:04 compute-0 ceph-mon[73551]: pgmap v843: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 10 10:14:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:04 compute-0 nova_compute[261329]: 2025-10-10 10:14:04.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 10 10:14:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:05.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:05.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:06 compute-0 ceph-mon[73551]: pgmap v844: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 10 10:14:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 10 10:14:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:07.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:07.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:14:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:07.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:14:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:07.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:14:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:07] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:07] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:07 compute-0 nova_compute[261329]: 2025-10-10 10:14:07.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:08 compute-0 ceph-mon[73551]: pgmap v845: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 10 10:14:08 compute-0 podman[272062]: 2025-10-10 10:14:08.207155124 +0000 UTC m=+0.049654616 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:14:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 10 10:14:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:09.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:09 compute-0 nova_compute[261329]: 2025-10-10 10:14:09.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:10 compute-0 ceph-mon[73551]: pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 10 10:14:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 10 10:14:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:12 compute-0 ceph-mon[73551]: pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 10 10:14:12 compute-0 nova_compute[261329]: 2025-10-10 10:14:12.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 10 10:14:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:13.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:13.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:14 compute-0 ceph-mon[73551]: pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 10 10:14:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:14 compute-0 nova_compute[261329]: 2025-10-10 10:14:14.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:14 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101414 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:14:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 35 KiB/s wr, 5 op/s
Oct 10 10:14:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:15.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:15.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:16 compute-0 ceph-mon[73551]: pgmap v849: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 35 KiB/s wr, 5 op/s
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:14:16
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.meta', '.nfs']
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:14:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:14:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:14:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 35 KiB/s wr, 5 op/s
Oct 10 10:14:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:17.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:17.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
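Every two seconds, 192.168.122.100 and 192.168.122.102 issue an anonymous "HEAD / HTTP/1.0" against RGW, which looks like a load-balancer health probe. For ad-hoc analysis of these beast access-log lines, a throwaway parser; the field layout is inferred from the lines in this log only:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7f96beba75d0: 192.168.122.102 - anonymous '
            '[10/Oct/2025:10:14:17.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000033s')
    m = BEAST.search(line)
    print(m['ip'], m['req'], m['status'], float(m['lat']))
    # -> 192.168.122.102 HEAD / HTTP/1.0 200 0.001000033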
Oct 10 10:14:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:17.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:14:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:17.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:14:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:17.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
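The two webhook targets Alertmanager cannot reach above are the Ceph dashboard's Prometheus receiver on compute-1 and compute-2. A quick probe of the same URLs, run outside Alertmanager, distinguishes the i/o timeout seen here from a refused or answering socket; the URLs are copied verbatim from the log, and the 3-second timeout is an arbitrary choice:

    import urllib.request, urllib.error

    for host in ('compute-1.ctlplane.example.com', 'compute-2.ctlplane.example.com'):
        url = f'http://{host}:8443/api/prometheus_receiver'
        try:
            # Alertmanager POSTs alert JSON; an empty POST is enough to tell
            # a dead route from a reachable receiver.
            urllib.request.urlopen(url, data=b'[]', timeout=3)
            print(host, 'reachable')
        except urllib.error.URLError as exc:
            print(host, 'failed:', exc.reason)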
Oct 10 10:14:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:14:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:14:17 compute-0 nova_compute[261329]: 2025-10-10 10:14:17.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:18 compute-0 ceph-mon[73551]: pgmap v850: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 35 KiB/s wr, 5 op/s
Oct 10 10:14:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3515183494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:18 compute-0 sudo[272090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:14:18 compute-0 sudo[272090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:18 compute-0 sudo[272090]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 39 KiB/s wr, 5 op/s
Oct 10 10:14:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:19.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:19 compute-0 nova_compute[261329]: 2025-10-10 10:14:19.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:20 compute-0 ceph-mon[73551]: pgmap v851: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 39 KiB/s wr, 5 op/s
Oct 10 10:14:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 16 KiB/s wr, 0 op/s
Oct 10 10:14:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:21.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:22 compute-0 ceph-mon[73551]: pgmap v852: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 16 KiB/s wr, 0 op/s
Oct 10 10:14:22 compute-0 nova_compute[261329]: 2025-10-10 10:14:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 10 10:14:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:23.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:23.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1953104862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:14:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1711456005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:14:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:24 compute-0 ceph-mon[73551]: pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 10 10:14:24 compute-0 nova_compute[261329]: 2025-10-10 10:14:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:14:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:25.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:25.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:25 compute-0 podman[272122]: 2025-10-10 10:14:25.214245115 +0000 UTC m=+0.059532871 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:14:25 compute-0 podman[272123]: 2025-10-10 10:14:25.253664883 +0000 UTC m=+0.093523136 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 10 10:14:25 compute-0 podman[272124]: 2025-10-10 10:14:25.261223983 +0000 UTC m=+0.089651221 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
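The three podman events above each bury the health-check verdict inside a long label dump. A throwaway extractor for the container name and health_status from such event lines; the layout is inferred from this log only and may not survive podman version changes:

    import re

    EVENT = re.compile(r'container health_status \S+ \(image=(?P<image>[^,]+), '
                       r'name=(?P<name>[^,]+), health_status=(?P<status>[^,]+)')

    def parse(journal_line):
        m = EVENT.search(journal_line)
        return (m['name'], m['status'], m['image']) if m else None

    # e.g. parse(<the multipathd line above>) -> ('multipathd', 'healthy', 'quay.io/...')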
Oct 10 10:14:25 compute-0 ceph-mon[73551]: pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:14:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2445054076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:14:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2445054076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
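The two dispatches above (df, then osd pool get-quota on 'volumes', from client.openstack at .10) look like a Cinder capacity poll. The same mon commands can be reproduced by hand; a sketch shelling out to the ceph CLI, assuming a client.openstack keyring is available to it:

    import json, subprocess

    def mon_cmd(*args):
        # Issues the command with the same identity seen in the audit log.
        out = subprocess.run(
            ['ceph', '--name', 'client.openstack', *args, '--format', 'json'],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    print(mon_cmd('df')['stats']['total_bytes'])            # raw cluster capacity
    print(mon_cmd('osd', 'pool', 'get-quota', 'volumes'))   # per-pool quota, if any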
Oct 10 10:14:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:14:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:27.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:27.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:14:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:14:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:14:27 compute-0 ceph-mon[73551]: pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:14:27 compute-0 nova_compute[261329]: 2025-10-10 10:14:27.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 10 10:14:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:29.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:29.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:29 compute-0 nova_compute[261329]: 2025-10-10 10:14:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:30 compute-0 ceph-mon[73551]: pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 10 10:14:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 10 10:14:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:14:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:32 compute-0 ceph-mon[73551]: pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 10 10:14:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:32 compute-0 nova_compute[261329]: 2025-10-10 10:14:32.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 10 10:14:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:34 compute-0 ceph-mon[73551]: pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 10 10:14:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:34 compute-0 nova_compute[261329]: 2025-10-10 10:14:34.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:34 compute-0 sudo[272196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:34 compute-0 sudo[272196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:34 compute-0 sudo[272196]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:34 compute-0 sudo[272221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 10:14:34 compute-0 sudo[272221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:34 compute-0 sudo[272221]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:14:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:14:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:14:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:14:34 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 10 10:14:34 compute-0 sudo[272269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:34 compute-0 sudo[272269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:34 compute-0 sudo[272269]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:35 compute-0 sudo[272294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:14:35 compute-0 sudo[272294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:35.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:35 compute-0 sudo[272294]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:14:35 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: pgmap v859: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:14:35 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:14:35 compute-0 sudo[272352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:35 compute-0 sudo[272352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:35 compute-0 sudo[272352]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:35 compute-0 sudo[272377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:14:35 compute-0 sudo[272377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.454867824 +0000 UTC m=+0.056427411 container create 3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:14:36 compute-0 systemd[1]: Started libpod-conmon-3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6.scope.
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.429894197 +0000 UTC m=+0.031453784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.551535308 +0000 UTC m=+0.153094875 container init 3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.558509601 +0000 UTC m=+0.160069148 container start 3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_neumann, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.561884718 +0000 UTC m=+0.163444355 container attach 3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:14:36 compute-0 frosty_neumann[272463]: 167 167
Oct 10 10:14:36 compute-0 systemd[1]: libpod-3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6.scope: Deactivated successfully.
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.567361302 +0000 UTC m=+0.168920879 container died 3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d5cf08b33a48cbd92f4c621e47a0a93251e7aadbe92c86731dc990dae092b73-merged.mount: Deactivated successfully.
Oct 10 10:14:36 compute-0 podman[272447]: 2025-10-10 10:14:36.613510165 +0000 UTC m=+0.215069712 container remove 3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 10:14:36 compute-0 systemd[1]: libpod-conmon-3172babc15a27287d840d26a9a4788cf29cc69a4b518bb82ea67f89e268328c6.scope: Deactivated successfully.
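The throwaway container above (frosty_neumann) printed "167 167" and exited immediately, which matches cephadm probing the image for the ceph uid and gid before deploying daemons. A hedged reproduction; the stat-on-/var/lib/ceph entrypoint is my assumption about what cephadm ran, and the image digest is copied from the log:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected '167 167', the ceph uid/gid inside the image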
Oct 10 10:14:36 compute-0 podman[272485]: 2025-10-10 10:14:36.800867932 +0000 UTC m=+0.048104426 container create 38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ptolemy, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 10:14:36 compute-0 systemd[1]: Started libpod-conmon-38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c.scope.
Oct 10 10:14:36 compute-0 podman[272485]: 2025-10-10 10:14:36.781737332 +0000 UTC m=+0.028973826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a757ad60713884f1b4438f1ddf0b072fb60dd98134f50ceb16776a1a4e5464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a757ad60713884f1b4438f1ddf0b072fb60dd98134f50ceb16776a1a4e5464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a757ad60713884f1b4438f1ddf0b072fb60dd98134f50ceb16776a1a4e5464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a757ad60713884f1b4438f1ddf0b072fb60dd98134f50ceb16776a1a4e5464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a757ad60713884f1b4438f1ddf0b072fb60dd98134f50ceb16776a1a4e5464/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:36 compute-0 podman[272485]: 2025-10-10 10:14:36.907524095 +0000 UTC m=+0.154760599 container init 38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ptolemy, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:14:36 compute-0 podman[272485]: 2025-10-10 10:14:36.926142099 +0000 UTC m=+0.173378603 container start 38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:14:36 compute-0 podman[272485]: 2025-10-10 10:14:36.929985632 +0000 UTC m=+0.177222106 container attach 38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ptolemy, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:14:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 10 10:14:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:37.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:37.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:14:37 compute-0 busy_ptolemy[272502]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:14:37 compute-0 busy_ptolemy[272502]: --> All data devices are unavailable
Oct 10 10:14:37 compute-0 systemd[1]: libpod-38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c.scope: Deactivated successfully.
Oct 10 10:14:37 compute-0 podman[272485]: 2025-10-10 10:14:37.380861606 +0000 UTC m=+0.628098110 container died 38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:14:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:37] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:37] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-40a757ad60713884f1b4438f1ddf0b072fb60dd98134f50ceb16776a1a4e5464-merged.mount: Deactivated successfully.
Oct 10 10:14:37 compute-0 podman[272485]: 2025-10-10 10:14:37.427422712 +0000 UTC m=+0.674659186 container remove 38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:14:37 compute-0 systemd[1]: libpod-conmon-38fdcd12bc1e197bc4803b2fd66dc4d30974ab2e627ad0aa9d95f6fa23ecf56c.scope: Deactivated successfully.
Oct 10 10:14:37 compute-0 sudo[272377]: pam_unix(sudo:session): session closed for user root
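The batch run that just closed ended with ceph-volume rejecting its only candidate ("All data devices are unavailable" at 10:14:37), and cephadm follows up below with ceph-volume lvm list --format json. A sketch of reading that JSON to check whether /dev/ceph_vg0/ceph_lv0 is already claimed by an existing OSD, the usual reason a device is reported unavailable; the key names follow ceph-volume's output as I recall it:

    import json, subprocess

    # Must run where ceph-volume exists, e.g. inside 'cephadm shell'.
    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            tags = dev.get('tags', {})
            print(osd_id, dev.get('lv_path'), tags.get('ceph.osd_fsid'))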
Oct 10 10:14:37 compute-0 sudo[272531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:37 compute-0 sudo[272531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:37 compute-0 sudo[272531]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:37 compute-0 sudo[272556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:14:37 compute-0 sudo[272556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:37 compute-0 nova_compute[261329]: 2025-10-10 10:14:37.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.01852212 +0000 UTC m=+0.041136663 container create d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 10 10:14:38 compute-0 ceph-mon[73551]: pgmap v860: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 10 10:14:38 compute-0 systemd[1]: Started libpod-conmon-d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad.scope.
Oct 10 10:14:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.091910091 +0000 UTC m=+0.114524604 container init d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.001838738 +0000 UTC m=+0.024453271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.09780996 +0000 UTC m=+0.120424463 container start d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.100972251 +0000 UTC m=+0.123586754 container attach d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:14:38 compute-0 nervous_shockley[272639]: 167 167
Oct 10 10:14:38 compute-0 systemd[1]: libpod-d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad.scope: Deactivated successfully.
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.102980935 +0000 UTC m=+0.125595438 container died d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 10:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-69b9b70da4ae8bb637709e01a5b10e796a52d45349f227aa2ac660f0a291bcf6-merged.mount: Deactivated successfully.
Oct 10 10:14:38 compute-0 podman[272623]: 2025-10-10 10:14:38.136436422 +0000 UTC m=+0.159050915 container remove d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_shockley, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:14:38 compute-0 systemd[1]: libpod-conmon-d4c48ccd0618c1f5c2fc09fc897116f7c7264f5a50569d9558b738ad5b324fad.scope: Deactivated successfully.
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.309607777 +0000 UTC m=+0.058227309 container create df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:14:38 compute-0 systemd[1]: Started libpod-conmon-df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf.scope.
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.277683539 +0000 UTC m=+0.026303161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:38 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/105eae9159a08fae81180cd17a095bf8dbcd74593686cab099e2d19c749fb887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/105eae9159a08fae81180cd17a095bf8dbcd74593686cab099e2d19c749fb887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/105eae9159a08fae81180cd17a095bf8dbcd74593686cab099e2d19c749fb887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/105eae9159a08fae81180cd17a095bf8dbcd74593686cab099e2d19c749fb887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.404443252 +0000 UTC m=+0.153062784 container init df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.41596098 +0000 UTC m=+0.164580512 container start df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.421028711 +0000 UTC m=+0.169648263 container attach df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:14:38 compute-0 podman[272680]: 2025-10-10 10:14:38.440181123 +0000 UTC m=+0.082491833 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:14:38 compute-0 sudo[272707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:14:38 compute-0 sudo[272707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:38 compute-0 sudo[272707]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:38 compute-0 keen_ritchie[272683]: {
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:     "0": [
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:         {
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "devices": [
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "/dev/loop3"
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             ],
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "lv_name": "ceph_lv0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "lv_size": "21470642176",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "name": "ceph_lv0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "tags": {
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.cluster_name": "ceph",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.crush_device_class": "",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.encrypted": "0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.osd_id": "0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.type": "block",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.vdo": "0",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:                 "ceph.with_tpm": "0"
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             },
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "type": "block",
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:             "vg_name": "ceph_vg0"
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:         }
Oct 10 10:14:38 compute-0 keen_ritchie[272683]:     ]
Oct 10 10:14:38 compute-0 keen_ritchie[272683]: }
Oct 10 10:14:38 compute-0 systemd[1]: libpod-df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf.scope: Deactivated successfully.
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.708046469 +0000 UTC m=+0.456666001 container died df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-105eae9159a08fae81180cd17a095bf8dbcd74593686cab099e2d19c749fb887-merged.mount: Deactivated successfully.
Oct 10 10:14:38 compute-0 podman[272666]: 2025-10-10 10:14:38.757872798 +0000 UTC m=+0.506492360 container remove df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_ritchie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 10:14:38 compute-0 systemd[1]: libpod-conmon-df4828383b52d5f4540ebcbdd7a9dd6a1dbe09ce288ac241169e3b54761f35bf.scope: Deactivated successfully.
Oct 10 10:14:38 compute-0 sudo[272556]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:38 compute-0 sudo[272748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:38 compute-0 sudo[272748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:38 compute-0 sudo[272748]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 188 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 10 10:14:38 compute-0 sudo[272773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:14:38 compute-0 sudo[272773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [WARNING] 282/101439 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:14:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [NOTICE] 282/101439 (4) : haproxy version is 2.3.17-d1c9119
Oct 10 10:14:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [NOTICE] 282/101439 (4) : path to executable is /usr/local/sbin/haproxy
Oct 10 10:14:39 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb[97009]: [ALERT] 282/101439 (4) : backend 'backend' has no server available!
Oct 10 10:14:39 compute-0 nova_compute[261329]: 2025-10-10 10:14:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.409851938 +0000 UTC m=+0.045216743 container create 4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:14:39 compute-0 systemd[1]: Started libpod-conmon-4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad.scope.
Oct 10 10:14:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.480024217 +0000 UTC m=+0.115389052 container init 4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_moser, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.385837903 +0000 UTC m=+0.021202768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.486017739 +0000 UTC m=+0.121382554 container start 4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_moser, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.489059306 +0000 UTC m=+0.124424151 container attach 4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:14:39 compute-0 epic_moser[272855]: 167 167
Oct 10 10:14:39 compute-0 systemd[1]: libpod-4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad.scope: Deactivated successfully.
Oct 10 10:14:39 compute-0 conmon[272855]: conmon 4558f0c524331dd59f9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad.scope/container/memory.events
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.492044201 +0000 UTC m=+0.127409016 container died 4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_moser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-06b145a35e0a86d5a2bfa521c5d5c5d418855a910cb1f5f1b0b4b924b81fa9f1-merged.mount: Deactivated successfully.
Oct 10 10:14:39 compute-0 podman[272839]: 2025-10-10 10:14:39.528805264 +0000 UTC m=+0.164170079 container remove 4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_moser, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:14:39 compute-0 systemd[1]: libpod-conmon-4558f0c524331dd59f9d09e58dce0b0837c51eee0c94f3c0f73559019ee595ad.scope: Deactivated successfully.
Oct 10 10:14:39 compute-0 podman[272878]: 2025-10-10 10:14:39.762455428 +0000 UTC m=+0.078537277 container create b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_newton, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:14:39 compute-0 systemd[1]: Started libpod-conmon-b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b.scope.
Oct 10 10:14:39 compute-0 podman[272878]: 2025-10-10 10:14:39.732607586 +0000 UTC m=+0.048689485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:39 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a116c0024f6380f587ac6c80b7a25ae74c4b1f60b459aece9ad075b428ac3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a116c0024f6380f587ac6c80b7a25ae74c4b1f60b459aece9ad075b428ac3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a116c0024f6380f587ac6c80b7a25ae74c4b1f60b459aece9ad075b428ac3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a116c0024f6380f587ac6c80b7a25ae74c4b1f60b459aece9ad075b428ac3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:39 compute-0 podman[272878]: 2025-10-10 10:14:39.877030023 +0000 UTC m=+0.193111932 container init b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_newton, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:14:39 compute-0 podman[272878]: 2025-10-10 10:14:39.887524419 +0000 UTC m=+0.203606228 container start b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_newton, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:14:39 compute-0 podman[272878]: 2025-10-10 10:14:39.891477354 +0000 UTC m=+0.207559223 container attach b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_newton, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:14:40 compute-0 ceph-mon[73551]: pgmap v861: 353 pgs: 353 active+clean; 188 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 10 10:14:40 compute-0 lvm[272970]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:14:40 compute-0 lvm[272970]: VG ceph_vg0 finished
Oct 10 10:14:40 compute-0 heuristic_newton[272895]: {}
Oct 10 10:14:40 compute-0 systemd[1]: libpod-b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b.scope: Deactivated successfully.
Oct 10 10:14:40 compute-0 podman[272878]: 2025-10-10 10:14:40.629043415 +0000 UTC m=+0.945125234 container died b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:14:40 compute-0 systemd[1]: libpod-b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b.scope: Consumed 1.166s CPU time.
Oct 10 10:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a116c0024f6380f587ac6c80b7a25ae74c4b1f60b459aece9ad075b428ac3d-merged.mount: Deactivated successfully.
Oct 10 10:14:40 compute-0 podman[272878]: 2025-10-10 10:14:40.667636347 +0000 UTC m=+0.983718156 container remove b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:14:40 compute-0 systemd[1]: libpod-conmon-b813a2b4103352121abb11b2ab7fbc9ed2b66835d954815ac5625dedae89f62b.scope: Deactivated successfully.
Oct 10 10:14:40 compute-0 sudo[272773]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:14:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:14:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:40 compute-0 sudo[272984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:14:40 compute-0 sudo[272984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:40 compute-0 sudo[272984]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 188 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 770 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Oct 10 10:14:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:41.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:41 compute-0 ceph-mon[73551]: pgmap v862: 353 pgs: 353 active+clean; 188 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 770 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Oct 10 10:14:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:14:41.902 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:14:41.903 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:14:41.903 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:42 compute-0 nova_compute[261329]: 2025-10-10 10:14:42.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 917 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 10 10:14:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:43.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:44 compute-0 ceph-mon[73551]: pgmap v863: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 917 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 10 10:14:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:44 compute-0 nova_compute[261329]: 2025-10-10 10:14:44.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:14:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:45.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:45 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:14:45.776 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:14:45 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:14:45.777 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:14:45 compute-0 nova_compute[261329]: 2025-10-10 10:14:45.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:46 compute-0 ceph-mon[73551]: pgmap v864: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:14:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:14:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:14:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:14:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.062796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287062836, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2117, "num_deletes": 251, "total_data_size": 4115701, "memory_usage": 4171680, "flush_reason": "Manual Compaction"}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 10 10:14:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287087107, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3994523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24741, "largest_seqno": 26857, "table_properties": {"data_size": 3985243, "index_size": 5774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19477, "raw_average_key_size": 20, "raw_value_size": 3966615, "raw_average_value_size": 4119, "num_data_blocks": 254, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091080, "oldest_key_time": 1760091080, "file_creation_time": 1760091287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 24367 microseconds, and 7894 cpu microseconds.
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.087165) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3994523 bytes OK
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.087189) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.088840) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.088856) EVENT_LOG_v1 {"time_micros": 1760091287088850, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.088877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4107147, prev total WAL file size 4107147, number of live WAL files 2.
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.090406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3900KB)], [56(11MB)]
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287090484, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16447718, "oldest_snapshot_seqno": -1}
Oct 10 10:14:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:47.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:47.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:14:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:47.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5811 keys, 14325802 bytes, temperature: kUnknown
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287170148, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14325802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14286605, "index_size": 23599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147674, "raw_average_key_size": 25, "raw_value_size": 14181268, "raw_average_value_size": 2440, "num_data_blocks": 964, "num_entries": 5811, "num_filter_entries": 5811, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.170538) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14325802 bytes
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.171873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.3 rd, 179.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6329, records dropped: 518 output_compression: NoCompression
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.171912) EVENT_LOG_v1 {"time_micros": 1760091287171894, "job": 30, "event": "compaction_finished", "compaction_time_micros": 79743, "compaction_time_cpu_micros": 39009, "output_level": 6, "num_output_files": 1, "total_output_size": 14325802, "num_input_records": 6329, "num_output_records": 5811, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287173609, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287178503, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.090248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.178553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.178560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.178563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.178565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:14:47.178568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:47] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:47] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:47 compute-0 nova_compute[261329]: 2025-10-10 10:14:47.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:48 compute-0 ceph-mon[73551]: pgmap v865: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:14:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 10 10:14:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1817080526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2226505712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:49.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:49 compute-0 nova_compute[261329]: 2025-10-10 10:14:49.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:50 compute-0 ceph-mon[73551]: pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 10 10:14:50 compute-0 nova_compute[261329]: 2025-10-10 10:14:50.416 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:50 compute-0 nova_compute[261329]: 2025-10-10 10:14:50.417 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:14:50 compute-0 nova_compute[261329]: 2025-10-10 10:14:50.417 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:14:50 compute-0 nova_compute[261329]: 2025-10-10 10:14:50.430 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:14:50 compute-0 nova_compute[261329]: 2025-10-10 10:14:50.431 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 109 KiB/s wr, 57 op/s
Oct 10 10:14:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:51.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:51.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:52 compute-0 ceph-mon[73551]: pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 109 KiB/s wr, 57 op/s
Oct 10 10:14:52 compute-0 nova_compute[261329]: 2025-10-10 10:14:52.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:52 compute-0 nova_compute[261329]: 2025-10-10 10:14:52.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 109 KiB/s wr, 58 op/s
Oct 10 10:14:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:53.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:53 compute-0 nova_compute[261329]: 2025-10-10 10:14:53.233 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:53 compute-0 nova_compute[261329]: 2025-10-10 10:14:53.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:53 compute-0 nova_compute[261329]: 2025-10-10 10:14:53.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:14:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9509 writes, 36K keys, 9509 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9509 writes, 2350 syncs, 4.05 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1762 writes, 5246 keys, 1762 commit groups, 1.0 writes per commit group, ingest: 4.67 MB, 0.01 MB/s
                                           Interval WAL: 1762 writes, 786 syncs, 2.24 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
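[annotation] The RocksDB dump is internally consistent: the derived ratios follow directly from the logged counters. A quick check of that arithmetic, with every number taken verbatim from the dump above:

    # Cumulative WAL: 9509 writes over 2350 syncs -> "4.05 writes per sync"
    print(round(9509 / 2350, 2))   # 4.05
    # Interval WAL: 1762 writes over 786 syncs -> "2.24 writes per sync"
    print(round(1762 / 786, 2))    # 2.24
    # 9509 writes in 9509 commit groups -> "1.0 writes per commit group"
    print(round(9509 / 9509, 1))   # 1.0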
Oct 10 10:14:54 compute-0 ceph-mon[73551]: pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 109 KiB/s wr, 58 op/s
Oct 10 10:14:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1407840113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1389124777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.276 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.276 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.276 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.277 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.277 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:14:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:14:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987096323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.741 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
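[annotation] The resource audit gathers Ceph capacity by shelling out, as the two processutils lines show: ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf returned 0 in 0.464 s. A minimal standalone sketch of the same call, assuming the ceph CLI, conf and keyring are available on the host; the JSON layout differs across Ceph releases, so only the top-level structure is printed:

    import json
    import subprocess

    # Same command the resource tracker runs (copied from the log above).
    cmd = ['ceph', 'df', '--format=json',
           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    df = json.loads(out.stdout)

    # Typically a cluster-wide 'stats' section plus a per-pool 'pools' list;
    # exact key names vary by release, hence the defensive .get().
    print(sorted(df))
    print(df.get('stats', {}))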
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.897 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.898 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=59.94269943237305GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.898 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.899 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.977 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:14:54 compute-0 nova_compute[261329]: 2025-10-10 10:14:54.978 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:14:55 compute-0 nova_compute[261329]: 2025-10-10 10:14:55.009 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:14:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1987096323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:55.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:14:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773403696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:55 compute-0 nova_compute[261329]: 2025-10-10 10:14:55.499 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:14:55 compute-0 nova_compute[261329]: 2025-10-10 10:14:55.504 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:14:55 compute-0 nova_compute[261329]: 2025-10-10 10:14:55.529 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
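[annotation] The inventory record above pins down what placement will actually schedule on this node. Under the usual placement capacity rule, effective capacity = (total - reserved) * allocation_ratio (standard placement behaviour, not something stated in this log), the logged figures work out as:

    # Inventory values copied from the log line above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

So the 8 physical vCPUs advertise as 32 schedulable ones, matching the 4.0 CPU allocation ratio, while disk is derated to 52.2 GB by the 0.9 ratio.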
Oct 10 10:14:55 compute-0 nova_compute[261329]: 2025-10-10 10:14:55.531 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:14:55 compute-0 nova_compute[261329]: 2025-10-10 10:14:55.532 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:14:55.780 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:14:56 compute-0 ceph-mon[73551]: pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 10 10:14:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/773403696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4254524373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:56 compute-0 podman[273071]: 2025-10-10 10:14:56.252223279 +0000 UTC m=+0.080048505 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:14:56 compute-0 podman[273069]: 2025-10-10 10:14:56.252187647 +0000 UTC m=+0.090402025 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 10 10:14:56 compute-0 podman[273070]: 2025-10-10 10:14:56.252187677 +0000 UTC m=+0.087880664 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 10:14:56 compute-0 nova_compute[261329]: 2025-10-10 10:14:56.532 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:56 compute-0 nova_compute[261329]: 2025-10-10 10:14:56.532 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:14:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 10 10:14:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:14:57.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
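[annotation] This dispatcher error recurs roughly every ten seconds in this window (10:14:57, 10:15:07, 10:15:17, 10:15:27): alertmanager cannot POST to the Ceph dashboard webhook receivers on compute-1 and compute-2. A quick standard-library probe of the same endpoints; the URLs are copied from the error message, while the empty JSON body and the 5 s timeout are assumptions made for the check:

    import urllib.request

    # Receiver endpoints copied from the dispatcher error above.
    urls = [
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver',
    ]

    for url in urls:
        req = urllib.request.Request(
            url, data=b'{}', method='POST',
            headers={'Content-Type': 'application/json'})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, '->', resp.status)
        except Exception as exc:   # timeout, connection refused, HTTP error
            print(url, '->', exc)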
Oct 10 10:14:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:57] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:14:57] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:14:57 compute-0 nova_compute[261329]: 2025-10-10 10:14:57.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:58 compute-0 ceph-mon[73551]: pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 10 10:14:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/688430293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:58 compute-0 sudo[273136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:14:58 compute-0 sudo[273136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:58 compute-0 sudo[273136]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 10 10:14:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:14:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:14:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:14:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:14:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:59.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:14:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2213807489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:59 compute-0 nova_compute[261329]: 2025-10-10 10:14:59.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:00 compute-0 ceph-mon[73551]: pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 10 10:15:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:01.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:15:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:02 compute-0 ceph-mon[73551]: pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:02 compute-0 nova_compute[261329]: 2025-10-10 10:15:02.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:04 compute-0 ceph-mon[73551]: pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:04 compute-0 nova_compute[261329]: 2025-10-10 10:15:04.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:15:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:05.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:06 compute-0 ceph-mon[73551]: pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:15:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:15:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:07.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:15:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:07] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:15:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:07] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:15:07 compute-0 nova_compute[261329]: 2025-10-10 10:15:07.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:08 compute-0 ceph-mon[73551]: pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:15:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:09.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:09 compute-0 podman[273172]: 2025-10-10 10:15:09.220436802 +0000 UTC m=+0.065469491 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:15:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:09 compute-0 nova_compute[261329]: 2025-10-10 10:15:09.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:10 compute-0 ceph-mon[73551]: pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:12 compute-0 ceph-mon[73551]: pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:12 compute-0 nova_compute[261329]: 2025-10-10 10:15:12.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:15:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:13.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:14 compute-0 ceph-mon[73551]: pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:15:14 compute-0 nova_compute[261329]: 2025-10-10 10:15:14.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:15.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:15.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:15 compute-0 ceph-mon[73551]: pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:15:16
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', '.mgr', 'vms']
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:15:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:15:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
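[annotation] The pg_autoscaler targets above are reproducible from the logged inputs: each pool's raw target is usage_fraction * bias * a cluster-wide PG budget, and every line is consistent with a budget of 300 (plausibly 3 OSDs at the default mon_target_pg_per_osd = 100; that split is an inference, not something the log states). A check against three of the pools:

    PG_BUDGET = 300  # inferred from the logged targets

    pools = [
        # (name, usage fraction, bias, logged pg target) -- from the log
        ('.mgr',               7.185749983720779e-06, 1.0,
         0.0021557249951162337),
        ('images',             0.000665858301588852,  1.0,
         0.19975749047665559),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0,
         0.0006104707950771635),
    ]

    for name, frac, bias, logged in pools:
        target = frac * bias * PG_BUDGET
        print(name, target, abs(target - logged) < 1e-12)  # all True

The autoscaler then rounds that raw figure to a usable pg_num, which is the "quantized to 1/16/32" seen in the lines above.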
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:15:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:17.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:15:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:17.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:17 compute-0 ceph-mon[73551]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:17] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:15:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:17] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:15:17 compute-0 nova_compute[261329]: 2025-10-10 10:15:17.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:18 compute-0 sudo[273201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:15:18 compute-0 sudo[273201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:18 compute-0 sudo[273201]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:15:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:19.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:19.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:19 compute-0 nova_compute[261329]: 2025-10-10 10:15:19.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:20 compute-0 ceph-mon[73551]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:15:20 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3825464269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:21.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:21.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:22 compute-0 ceph-mon[73551]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:22 compute-0 nova_compute[261329]: 2025-10-10 10:15:22.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:23.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:24 compute-0 ceph-mon[73551]: pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:24 compute-0 nova_compute[261329]: 2025-10-10 10:15:24.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:25.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:25.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:26 compute-0 ceph-mon[73551]: pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3004183643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:15:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3003324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:15:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:15:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3262215243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:15:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:15:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3262215243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:15:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3262215243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:15:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3262215243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:15:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:27.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:27.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:15:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:27.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:27 compute-0 podman[273236]: 2025-10-10 10:15:27.233141782 +0000 UTC m=+0.079977493 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:15:27 compute-0 podman[273237]: 2025-10-10 10:15:27.23499265 +0000 UTC m=+0.080432346 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:15:27 compute-0 podman[273235]: 2025-10-10 10:15:27.236544421 +0000 UTC m=+0.086302696 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:15:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:27] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:15:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:27] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:15:27 compute-0 nova_compute[261329]: 2025-10-10 10:15:27.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:28 compute-0 ceph-mon[73551]: pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 10 10:15:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:29.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:29.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:29 compute-0 nova_compute[261329]: 2025-10-10 10:15:29.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:30 compute-0 ceph-mon[73551]: pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 10 10:15:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 10 10:15:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:31.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:15:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:32 compute-0 ceph-mon[73551]: pgmap v887: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 10 10:15:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:32 compute-0 nova_compute[261329]: 2025-10-10 10:15:32.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:15:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:33.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:33.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:34 compute-0 ceph-mon[73551]: pgmap v888: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:15:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:34 compute-0 nova_compute[261329]: 2025-10-10 10:15:34.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:15:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:35.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:36 compute-0 ceph-mon[73551]: pgmap v889: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:15:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:15:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:37.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:37.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:15:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:37.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:37] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:15:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:37] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:15:37 compute-0 nova_compute[261329]: 2025-10-10 10:15:37.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:38 compute-0 ceph-mon[73551]: pgmap v890: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:15:38 compute-0 sudo[273311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:15:38 compute-0 sudo[273311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:38 compute-0 sudo[273311]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 10 10:15:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:39.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:39.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:39 compute-0 nova_compute[261329]: 2025-10-10 10:15:39.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:40 compute-0 podman[273338]: 2025-10-10 10:15:40.1974883 +0000 UTC m=+0.050376718 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:15:40 compute-0 ceph-mon[73551]: pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 10 10:15:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Oct 10 10:15:41 compute-0 sudo[273359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:41 compute-0 sudo[273359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:41 compute-0 sudo[273359]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:41.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:41 compute-0 sudo[273384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:15:41 compute-0 sudo[273384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:41.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:15:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:41 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:15:41 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:41 compute-0 sudo[273384]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:41 compute-0 sudo[273442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:41 compute-0 sudo[273442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:41 compute-0 sudo[273442]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:41 compute-0 sudo[273467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- inventory --format=json-pretty --filter-for-batch
Oct 10 10:15:41 compute-0 sudo[273467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:15:41.903 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:15:41.904 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:15:41.904 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.196185845 +0000 UTC m=+0.039521712 container create fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 10:15:42 compute-0 ceph-mon[73551]: pgmap v892: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Oct 10 10:15:42 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:42 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:42 compute-0 systemd[1]: Started libpod-conmon-fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0.scope.
Oct 10 10:15:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.270904479 +0000 UTC m=+0.114240356 container init fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.178758959 +0000 UTC m=+0.022094856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.27750982 +0000 UTC m=+0.120845687 container start fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.280289289 +0000 UTC m=+0.123625156 container attach fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mayer, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:15:42 compute-0 brave_mayer[273551]: 167 167
Oct 10 10:15:42 compute-0 systemd[1]: libpod-fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0.scope: Deactivated successfully.
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.283224012 +0000 UTC m=+0.126559879 container died fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d5334cc022695f259b09c0c3a85e7dc50cbad23674c80b6ee232b483d1e3278-merged.mount: Deactivated successfully.
Oct 10 10:15:42 compute-0 podman[273534]: 2025-10-10 10:15:42.321262685 +0000 UTC m=+0.164598552 container remove fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:15:42 compute-0 systemd[1]: libpod-conmon-fdaede0d406b3921e402c53edb75dbc3a1ef03bf1559e4d7b82414274c5b78b0.scope: Deactivated successfully.
Oct 10 10:15:42 compute-0 podman[273575]: 2025-10-10 10:15:42.500423701 +0000 UTC m=+0.049737517 container create d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 10:15:42 compute-0 systemd[1]: Started libpod-conmon-d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b.scope.
Oct 10 10:15:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2092323e8fb101af06741f016ba443eac045b23ce08d29e265ade0c7d729/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2092323e8fb101af06741f016ba443eac045b23ce08d29e265ade0c7d729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2092323e8fb101af06741f016ba443eac045b23ce08d29e265ade0c7d729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2092323e8fb101af06741f016ba443eac045b23ce08d29e265ade0c7d729/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:42 compute-0 podman[273575]: 2025-10-10 10:15:42.474174194 +0000 UTC m=+0.023488090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:42 compute-0 podman[273575]: 2025-10-10 10:15:42.575842277 +0000 UTC m=+0.125156113 container init d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:15:42 compute-0 podman[273575]: 2025-10-10 10:15:42.58217217 +0000 UTC m=+0.131485986 container start d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:15:42 compute-0 podman[273575]: 2025-10-10 10:15:42.584890366 +0000 UTC m=+0.134204182 container attach d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 10:15:42 compute-0 nova_compute[261329]: 2025-10-10 10:15:42.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 10 10:15:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:43.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:43.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]: [
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:     {
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "available": false,
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "being_replaced": false,
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "ceph_device_lvm": false,
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "lsm_data": {},
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "lvs": [],
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "path": "/dev/sr0",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "rejected_reasons": [
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "Insufficient space (<5GB)",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "Has a FileSystem"
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         ],
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         "sys_api": {
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "actuators": null,
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "device_nodes": [
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:                 "sr0"
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             ],
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "devname": "sr0",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "human_readable_size": "482.00 KB",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "id_bus": "ata",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "model": "QEMU DVD-ROM",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "nr_requests": "2",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "parent": "/dev/sr0",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "partitions": {},
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "path": "/dev/sr0",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "removable": "1",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "rev": "2.5+",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "ro": "0",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "rotational": "0",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "sas_address": "",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "sas_device_handle": "",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "scheduler_mode": "mq-deadline",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "sectors": 0,
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "sectorsize": "2048",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "size": 493568.0,
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "support_discard": "2048",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "type": "disk",
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:             "vendor": "QEMU"
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:         }
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]:     }
Oct 10 10:15:43 compute-0 focused_brahmagupta[273591]: ]
Oct 10 10:15:43 compute-0 systemd[1]: libpod-d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b.scope: Deactivated successfully.
Oct 10 10:15:43 compute-0 podman[273575]: 2025-10-10 10:15:43.336490665 +0000 UTC m=+0.885804501 container died d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9ff2092323e8fb101af06741f016ba443eac045b23ce08d29e265ade0c7d729-merged.mount: Deactivated successfully.
Oct 10 10:15:43 compute-0 podman[273575]: 2025-10-10 10:15:43.373081893 +0000 UTC m=+0.922395709 container remove d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:43 compute-0 systemd[1]: libpod-conmon-d285627d302d54a1b8aa9c861c77a524c78cbc6c8f945d85eaf02f43abc2405b.scope: Deactivated successfully.
Oct 10 10:15:43 compute-0 sudo[273467]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:15:43 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:15:43 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:15:44 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:15:44 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:15:44 compute-0 sudo[275007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:44 compute-0 sudo[275007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:44 compute-0 sudo[275007]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:44 compute-0 sudo[275032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:15:44 compute-0 sudo[275032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:44 compute-0 nova_compute[261329]: 2025-10-10 10:15:44.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.74431609 +0000 UTC m=+0.054489090 container create f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:44 compute-0 systemd[1]: Started libpod-conmon-f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c.scope.
Oct 10 10:15:44 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.722547845 +0000 UTC m=+0.032720825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.818654851 +0000 UTC m=+0.128827851 container init f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.826834562 +0000 UTC m=+0.137007522 container start f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.829959882 +0000 UTC m=+0.140132942 container attach f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:15:44 compute-0 loving_murdock[275113]: 167 167
Oct 10 10:15:44 compute-0 systemd[1]: libpod-f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c.scope: Deactivated successfully.
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.83489742 +0000 UTC m=+0.145070410 container died f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 10:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad15493dd7081b6e90c3d776daf1769bb15a4b73a4ec158e8d2efdcc37649cb7-merged.mount: Deactivated successfully.
Oct 10 10:15:44 compute-0 podman[275097]: 2025-10-10 10:15:44.877272501 +0000 UTC m=+0.187445471 container remove f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 10:15:44 compute-0 systemd[1]: libpod-conmon-f24299076187e52030ae02dbe3fcd69aefba13e1aad3e2e538699a3c4d2a838c.scope: Deactivated successfully.
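The create → init → start → attach → died → remove sequence above is cephadm probing the host through a short-lived, one-shot podman container of the Ceph image: it runs a single command (here it printed "167 167", the ceph uid/gid baked into the image) and is torn down immediately. A minimal sketch of that pattern follows; the image digest is copied from the log, but the stat command is an assumption — the log only shows the output "167 167", not the exact command cephadm executed.

    # One-shot "podman run --rm" probe, mirroring the lifecycle events above.
    # The stat invocation is a guess at what produced "167 167".
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())   # expected: "167 167" (ceph uid/gid)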
Oct 10 10:15:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.082646953 +0000 UTC m=+0.054404777 container create 9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 10:15:45 compute-0 systemd[1]: Started libpod-conmon-9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d.scope.
Oct 10 10:15:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:45.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.052812562 +0000 UTC m=+0.024570476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:45 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14c9eec9c27edc8fec84a4d5ee28fd5515e2ce9b68dda027470ae4d91a472fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14c9eec9c27edc8fec84a4d5ee28fd5515e2ce9b68dda027470ae4d91a472fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14c9eec9c27edc8fec84a4d5ee28fd5515e2ce9b68dda027470ae4d91a472fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14c9eec9c27edc8fec84a4d5ee28fd5515e2ce9b68dda027470ae4d91a472fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14c9eec9c27edc8fec84a4d5ee28fd5515e2ce9b68dda027470ae4d91a472fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
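The kernel's "supports timestamps until 2038" messages for the container bind mounts are informational: the backing XFS filesystem stores inode timestamps as 32-bit seconds (typically meaning it was created without the XFS bigtime feature), so the latest representable time is 0x7fffffff seconds after the epoch. A quick check of that constant:

    # 0x7fffffff from the kernel message is the classic 32-bit time_t limit.
    import datetime
    limit = datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc)
    print(limit)   # 2038-01-19 03:14:07+00:00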
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.182134187 +0000 UTC m=+0.153892011 container init 9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.198210511 +0000 UTC m=+0.169968335 container start 9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.201575278 +0000 UTC m=+0.173333112 container attach 9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:15:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:45.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:45 compute-0 thirsty_lamport[275154]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:15:45 compute-0 thirsty_lamport[275154]: --> All data devices are unavailable
Oct 10 10:15:45 compute-0 systemd[1]: libpod-9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d.scope: Deactivated successfully.
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.508133538 +0000 UTC m=+0.479891432 container died 9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lamport, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e14c9eec9c27edc8fec84a4d5ee28fd5515e2ce9b68dda027470ae4d91a472fd-merged.mount: Deactivated successfully.
Oct 10 10:15:45 compute-0 podman[275137]: 2025-10-10 10:15:45.558953849 +0000 UTC m=+0.530711673 container remove 9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:45 compute-0 systemd[1]: libpod-conmon-9269df246c5c23f9d6fe76d0375fe93dee1688fd314d64edfc21c11ce4a9dd6d.scope: Deactivated successfully.
Oct 10 10:15:45 compute-0 sudo[275032]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:45 compute-0 sudo[275183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:45 compute-0 sudo[275183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:45 compute-0 sudo[275183]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:45 compute-0 sudo[275208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:15:45 compute-0 sudo[275208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.186085687 +0000 UTC m=+0.052359871 container create 234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_babbage, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:15:46 compute-0 systemd[1]: Started libpod-conmon-234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120.scope.
Oct 10 10:15:46 compute-0 ceph-mon[73551]: pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:46 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.164768907 +0000 UTC m=+0.031043111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.269646293 +0000 UTC m=+0.135920517 container init 234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.277449602 +0000 UTC m=+0.143723786 container start 234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.280634853 +0000 UTC m=+0.146909037 container attach 234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:15:46 compute-0 elegant_babbage[275290]: 167 167
Oct 10 10:15:46 compute-0 systemd[1]: libpod-234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120.scope: Deactivated successfully.
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.283412152 +0000 UTC m=+0.149686376 container died 234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_babbage, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b67346c5641c988c1e2f187468c4896a00d9007e2b4ed3e02250b17d786633-merged.mount: Deactivated successfully.
Oct 10 10:15:46 compute-0 podman[275274]: 2025-10-10 10:15:46.32002007 +0000 UTC m=+0.186294244 container remove 234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_babbage, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:15:46 compute-0 systemd[1]: libpod-conmon-234d4c8a90413c8cddb1f67d9462136ecc8afb70f387aa8fc391a5ff572d4120.scope: Deactivated successfully.
Oct 10 10:15:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:15:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:15:46 compute-0 podman[275314]: 2025-10-10 10:15:46.536641041 +0000 UTC m=+0.053903371 container create 962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:15:46 compute-0 systemd[1]: Started libpod-conmon-962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2.scope.
Oct 10 10:15:46 compute-0 podman[275314]: 2025-10-10 10:15:46.513387979 +0000 UTC m=+0.030650369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:46 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d501a2fae7c42de271aeda51933d2930d647ea32204c7e04e8b5899f7a8352d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d501a2fae7c42de271aeda51933d2930d647ea32204c7e04e8b5899f7a8352d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d501a2fae7c42de271aeda51933d2930d647ea32204c7e04e8b5899f7a8352d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d501a2fae7c42de271aeda51933d2930d647ea32204c7e04e8b5899f7a8352d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:46 compute-0 podman[275314]: 2025-10-10 10:15:46.642603722 +0000 UTC m=+0.159866072 container init 962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:15:46 compute-0 podman[275314]: 2025-10-10 10:15:46.654463721 +0000 UTC m=+0.171726061 container start 962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 10:15:46 compute-0 podman[275314]: 2025-10-10 10:15:46.657467545 +0000 UTC m=+0.174729885 container attach 962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:15:46 compute-0 brave_clarke[275331]: {
Oct 10 10:15:46 compute-0 brave_clarke[275331]:     "0": [
Oct 10 10:15:46 compute-0 brave_clarke[275331]:         {
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "devices": [
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "/dev/loop3"
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             ],
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "lv_name": "ceph_lv0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "lv_size": "21470642176",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "name": "ceph_lv0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "tags": {
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.cluster_name": "ceph",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.crush_device_class": "",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.encrypted": "0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.osd_id": "0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.type": "block",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.vdo": "0",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:                 "ceph.with_tpm": "0"
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             },
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "type": "block",
Oct 10 10:15:46 compute-0 brave_clarke[275331]:             "vg_name": "ceph_vg0"
Oct 10 10:15:46 compute-0 brave_clarke[275331]:         }
Oct 10 10:15:46 compute-0 brave_clarke[275331]:     ]
Oct 10 10:15:46 compute-0 brave_clarke[275331]: }
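The brave_clarke container is the "ceph-volume lvm list --format json" call issued via sudo at 10:15:45: its JSON maps OSD id "0" to the LVM logical volume ceph_vg0/ceph_lv0 backed by /dev/loop3, with the cluster fsid and OSD fsid carried as LV tags. This is also consistent with the earlier "All data devices are unavailable" line — the preceding device scan found no new devices to claim because the only candidate is already consumed by this OSD. A sketch of pulling the mapping out of that output, assuming the JSON block above has been captured into the string lvm_list_json:

    # Extract the OSD id -> device mapping from the
    # "ceph-volume lvm list --format json" output shown above.
    import json

    listing = json.loads(lvm_list_json)   # lvm_list_json: the JSON from the log
    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id,
                  lv["lv_path"],
                  ",".join(lv["devices"]),
                  tags["ceph.osd_fsid"])
    # expected: 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 c307f4a4-...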
Oct 10 10:15:46 compute-0 systemd[1]: libpod-962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2.scope: Deactivated successfully.
Oct 10 10:15:46 compute-0 podman[275314]: 2025-10-10 10:15:46.981902177 +0000 UTC m=+0.499164517 container died 962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:15:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d501a2fae7c42de271aeda51933d2930d647ea32204c7e04e8b5899f7a8352d-merged.mount: Deactivated successfully.
Oct 10 10:15:47 compute-0 podman[275314]: 2025-10-10 10:15:47.026388276 +0000 UTC m=+0.543650616 container remove 962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:47 compute-0 systemd[1]: libpod-conmon-962b9737b5e7d815fedf97116482393c6e36ce9f185c2efab21e52db17d1bde2.scope: Deactivated successfully.
Oct 10 10:15:47 compute-0 sudo[275208]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:47.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:47 compute-0 sudo[275351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:47.161Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:15:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:47.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
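The two alertmanager lines show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out, so notifications are retried and eventually cancelled. A hedged reachability probe of one failing endpoint; the URL is copied from the log, while the empty alert payload is only a placeholder, not the shape Alertmanager actually posts:

    # Probe the webhook endpoint that Alertmanager failed to reach.
    import requests

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        r = requests.post(url, json={"alerts": []}, timeout=5)
        print(r.status_code)
    except requests.exceptions.RequestException as exc:
        print("unreachable:", exc)   # matches the "dial tcp ... i/o timeout" above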
Oct 10 10:15:47 compute-0 sudo[275351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:47 compute-0 sudo[275351]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:47 compute-0 sudo[275376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:15:47 compute-0 sudo[275376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:47] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Oct 10 10:15:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:47] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.64743695 +0000 UTC m=+0.039396348 container create 02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:15:47 compute-0 systemd[1]: Started libpod-conmon-02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f.scope.
Oct 10 10:15:47 compute-0 nova_compute[261329]: 2025-10-10 10:15:47.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:47 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.630897002 +0000 UTC m=+0.022856390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.729611362 +0000 UTC m=+0.121570740 container init 02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.738818485 +0000 UTC m=+0.130777843 container start 02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.741360936 +0000 UTC m=+0.133320314 container attach 02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_borg, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 10:15:47 compute-0 peaceful_borg[275459]: 167 167
Oct 10 10:15:47 compute-0 systemd[1]: libpod-02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f.scope: Deactivated successfully.
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.743066851 +0000 UTC m=+0.135026209 container died 02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 10:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5aefcb10e4a438acd09c6baf8c318b2d73ee74996c240a95d16c95e78c32955-merged.mount: Deactivated successfully.
Oct 10 10:15:47 compute-0 podman[275443]: 2025-10-10 10:15:47.776229879 +0000 UTC m=+0.168189247 container remove 02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:15:47 compute-0 systemd[1]: libpod-conmon-02ae0a4fe2e70321b5d4879ee07fb04af30199222feff3f97334140728bf985f.scope: Deactivated successfully.
Oct 10 10:15:47 compute-0 podman[275481]: 2025-10-10 10:15:47.928310501 +0000 UTC m=+0.043388346 container create 9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_kepler, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:47 compute-0 systemd[1]: Started libpod-conmon-9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89.scope.
Oct 10 10:15:47 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e357095f5e2e2f398f30faac2dad6d9625be6cded35eafae3a557efc76174f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e357095f5e2e2f398f30faac2dad6d9625be6cded35eafae3a557efc76174f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e357095f5e2e2f398f30faac2dad6d9625be6cded35eafae3a557efc76174f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e357095f5e2e2f398f30faac2dad6d9625be6cded35eafae3a557efc76174f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:48 compute-0 podman[275481]: 2025-10-10 10:15:47.908384865 +0000 UTC m=+0.023462730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:48 compute-0 podman[275481]: 2025-10-10 10:15:48.01326185 +0000 UTC m=+0.128339715 container init 9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_kepler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:15:48 compute-0 podman[275481]: 2025-10-10 10:15:48.01981766 +0000 UTC m=+0.134895505 container start 9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 10:15:48 compute-0 podman[275481]: 2025-10-10 10:15:48.023303031 +0000 UTC m=+0.138380926 container attach 9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 10:15:48 compute-0 ceph-mon[73551]: pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:48 compute-0 lvm[275574]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:15:48 compute-0 lvm[275574]: VG ceph_vg0 finished
Oct 10 10:15:48 compute-0 sharp_kepler[275498]: {}
Oct 10 10:15:48 compute-0 systemd[1]: libpod-9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89.scope: Deactivated successfully.
Oct 10 10:15:48 compute-0 systemd[1]: libpod-9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89.scope: Consumed 1.114s CPU time.
Oct 10 10:15:48 compute-0 podman[275481]: 2025-10-10 10:15:48.707581012 +0000 UTC m=+0.822658887 container died 9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_kepler, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e357095f5e2e2f398f30faac2dad6d9625be6cded35eafae3a557efc76174f7-merged.mount: Deactivated successfully.
Oct 10 10:15:48 compute-0 podman[275481]: 2025-10-10 10:15:48.75798795 +0000 UTC m=+0.873065795 container remove 9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:15:48 compute-0 systemd[1]: libpod-conmon-9e6b16817b5d712e9e576fbe4aa6b7b069d7f5ec13c423083b342a37fdaf8a89.scope: Deactivated successfully.
Oct 10 10:15:48 compute-0 sudo[275376]: pam_unix(sudo:session): session closed for user root
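sharp_kepler is the matching "ceph-volume raw list --format json" call from 10:15:47, and it printed "{}": there are no raw-mode (non-LVM) OSDs on this host, so only the lvm listing contributes. The lvm[275574] lines in between are LVM event activation noting that /dev/loop3 completes VG ceph_vg0. A small sketch of combining the two listings the way a cephadm-style caller could; lvm_list_json is the same assumed variable as above:

    # raw list returned {} here, so only the LVM-backed OSD ids remain.
    import json

    lvm_osds = set(json.loads(lvm_list_json))   # {"0"} from the log
    raw_osds = set(json.loads("{}"))            # empty raw listing
    print(sorted(lvm_osds | raw_osds))          # ['0']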
Oct 10 10:15:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:15:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:15:48 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:48 compute-0 sudo[275592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:15:48 compute-0 sudo[275592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:48 compute-0 sudo[275592]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:15:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:49.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
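The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 recur on a steady ~2-second cadence with no user agent — the typical signature of a load-balancer health check (HAProxy's httpchk behaves this way, though the log does not name the prober). A raw-socket reproduction, since HTTP/1.0 matches the logged request line; the target host and port are assumptions, as the log does not show where this radosgw instance listens:

    # Reproduce the health probe radosgw keeps logging: a bare HTTP/1.0 HEAD.
    import socket

    with socket.create_connection(("192.168.122.100", 8080), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        print(s.recv(1024).decode(errors="replace").splitlines()[0])
    # expected status line: "HTTP/... 200 OK", matching http_status=200 above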
Oct 10 10:15:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:49 compute-0 nova_compute[261329]: 2025-10-10 10:15:49.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:49 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:49 compute-0 ceph-mon[73551]: pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:15:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:51.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:51 compute-0 nova_compute[261329]: 2025-10-10 10:15:51.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:51 compute-0 nova_compute[261329]: 2025-10-10 10:15:51.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:15:51 compute-0 nova_compute[261329]: 2025-10-10 10:15:51.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:15:51 compute-0 nova_compute[261329]: 2025-10-10 10:15:51.258 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
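These nova-compute DEBUG lines are oslo.service periodic tasks firing on their timers; _heal_instance_info_cache rebuilt its instance list and found nothing to refresh on this host. A minimal sketch of the mechanism, assuming the standard oslo_service API from memory — treat the exact signatures as assumptions, and the spacing value is illustrative:

    # Minimal oslo.service periodic-task sketch (API recalled, not verified).
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            print("nothing to heal")   # mirrors the log: 0 instances found

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)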
Oct 10 10:15:52 compute-0 ceph-mon[73551]: pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:52 compute-0 nova_compute[261329]: 2025-10-10 10:15:52.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:52 compute-0 nova_compute[261329]: 2025-10-10 10:15:52.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:52 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:15:52.483 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:15:52 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:15:52.484 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:15:52 compute-0 nova_compute[261329]: 2025-10-10 10:15:52.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:15:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:53.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:15:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:15:53 compute-0 nova_compute[261329]: 2025-10-10 10:15:53.233 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-0 nova_compute[261329]: 2025-10-10 10:15:53.233 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-0 nova_compute[261329]: 2025-10-10 10:15:53.251 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-0 nova_compute[261329]: 2025-10-10 10:15:53.251 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-0 nova_compute[261329]: 2025-10-10 10:15:53.251 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:15:53 compute-0 nova_compute[261329]: 2025-10-10 10:15:53.274 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:15:54 compute-0 ceph-mon[73551]: pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:15:54 compute-0 nova_compute[261329]: 2025-10-10 10:15:54.260 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:54 compute-0 nova_compute[261329]: 2025-10-10 10:15:54.261 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:54 compute-0 nova_compute[261329]: 2025-10-10 10:15:54.261 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:54 compute-0 nova_compute[261329]: 2025-10-10 10:15:54.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 10 10:15:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2472353382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:55.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:55.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.266 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.267 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.267 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.267 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.268 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:15:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100558740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.718 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.871 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.873 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4564MB free_disk=59.9427490234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.874 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:55 compute-0 nova_compute[261329]: 2025-10-10 10:15:55.874 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.038 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.039 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:15:56 compute-0 ceph-mon[73551]: pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 10 10:15:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/375310620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3100558740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.105 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:15:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422541512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.542 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.547 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.575 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.578 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:15:56 compute-0 nova_compute[261329]: 2025-10-10 10:15:56.579 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 10 10:15:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/422541512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:15:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:15:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:15:57.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:15:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:57] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Oct 10 10:15:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:15:57] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Oct 10 10:15:57 compute-0 nova_compute[261329]: 2025-10-10 10:15:57.580 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:57 compute-0 nova_compute[261329]: 2025-10-10 10:15:57.580 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:15:57 compute-0 nova_compute[261329]: 2025-10-10 10:15:57.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:58 compute-0 ceph-mon[73551]: pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 10 10:15:58 compute-0 nova_compute[261329]: 2025-10-10 10:15:58.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:58 compute-0 podman[275670]: 2025-10-10 10:15:58.246100587 +0000 UTC m=+0.074387484 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:15:58 compute-0 podman[275671]: 2025-10-10 10:15:58.249469554 +0000 UTC m=+0.075032444 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:15:58 compute-0 podman[275672]: 2025-10-10 10:15:58.266238889 +0000 UTC m=+0.087349058 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 10 10:15:58 compute-0 sudo[275732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:15:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 10 10:15:59 compute-0 sudo[275732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:59 compute-0 sudo[275732]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:15:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:59.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:15:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:15:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:59 compute-0 nova_compute[261329]: 2025-10-10 10:15:59.254 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:59 compute-0 nova_compute[261329]: 2025-10-10 10:15:59.255 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:15:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:59 compute-0 nova_compute[261329]: 2025-10-10 10:15:59.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:00 compute-0 ceph-mon[73551]: pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 10 10:16:00 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:00.487 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:16:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s
Oct 10 10:16:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:01.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:16:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:02 compute-0 ceph-mon[73551]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s
Oct 10 10:16:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/924314656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:02 compute-0 nova_compute[261329]: 2025-10-10 10:16:02.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 7.3 KiB/s wr, 2 op/s
Oct 10 10:16:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:03.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/369795556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:03.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:04 compute-0 ceph-mon[73551]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 7.3 KiB/s wr, 2 op/s
Oct 10 10:16:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.772 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.773 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.788 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.866 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.867 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.873 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:16:04 compute-0 nova_compute[261329]: 2025-10-10 10:16:04.873 2 INFO nova.compute.claims [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Claim successful on node compute-0.ctlplane.example.com
Oct 10 10:16:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.017 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:05.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:05.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:16:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563002844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.538 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.545 2 DEBUG nova.compute.provider_tree [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.562 2 DEBUG nova.scheduler.client.report [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.586 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.587 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.645 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.645 2 DEBUG nova.network.neutron [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.676 2 INFO nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.707 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.800 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.802 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.802 2 INFO nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Creating image(s)
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.834 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.874 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.906 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.912 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:05 compute-0 nova_compute[261329]: 2025-10-10 10:16:05.944 2 DEBUG nova.policy [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.006 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.007 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.008 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.008 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.032 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.037 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:06 compute-0 ceph-mon[73551]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Oct 10 10:16:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1563002844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.310 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.416 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.568 2 DEBUG nova.objects.instance [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid c16efa00-d8c2-4271-81ee-2b14db71ec3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.587 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.588 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Ensure instance console log exists: /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.589 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.589 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.590 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:06 compute-0 nova_compute[261329]: 2025-10-10 10:16:06.963 2 DEBUG nova.network.neutron [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Successfully created port: 76963199-0e40-4c51-84ca-2dbe48d96157 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:16:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Oct 10 10:16:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:07.164Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:16:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:07.164Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:07.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:16:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:07.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:07.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:07] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 10 10:16:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:07] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 10 10:16:07 compute-0 nova_compute[261329]: 2025-10-10 10:16:07.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.025 2 DEBUG nova.network.neutron [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Successfully updated port: 76963199-0e40-4c51-84ca-2dbe48d96157 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.050 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.051 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.051 2 DEBUG nova.network.neutron [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.145 2 DEBUG nova.compute.manager [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-changed-76963199-0e40-4c51-84ca-2dbe48d96157 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.146 2 DEBUG nova.compute.manager [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Refreshing instance network info cache due to event network-changed-76963199-0e40-4c51-84ca-2dbe48d96157. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.147 2 DEBUG oslo_concurrency.lockutils [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:16:08 compute-0 nova_compute[261329]: 2025-10-10 10:16:08.198 2 DEBUG nova.network.neutron [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:16:08 compute-0 ceph-mon[73551]: pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Oct 10 10:16:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.118 2 DEBUG nova.network.neutron [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Updating instance_info_cache with network_info: [{"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.139 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.140 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Instance network_info: |[{"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.140 2 DEBUG oslo_concurrency.lockutils [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.141 2 DEBUG nova.network.neutron [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Refreshing network info cache for port 76963199-0e40-4c51-84ca-2dbe48d96157 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.148 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Start _get_guest_xml network_info=[{"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_options': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.153 2 WARNING nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.159 2 DEBUG nova.virt.libvirt.host [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.160 2 DEBUG nova.virt.libvirt.host [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.166 2 DEBUG nova.virt.libvirt.host [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.166 2 DEBUG nova.virt.libvirt.host [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.166 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.167 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.167 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.167 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.167 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.168 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.168 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.168 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.168 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.168 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.168 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.169 2 DEBUG nova.virt.hardware [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:16:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.171 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:09.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:16:09 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304454409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.608 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.632 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:09 compute-0 nova_compute[261329]: 2025-10-10 10:16:09.635 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:16:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889548273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.086 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.089 2 DEBUG nova.virt.libvirt.vif [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:16:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2113120084',display_name='tempest-TestNetworkBasicOps-server-2113120084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2113120084',id=7,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJrf9KkmlaaRYT+DpqiYxbLmdhilumL9tFSmQIOr40WDGEJap0YHfLpMRrMfauuqDXnv+8RO5/xg47zMyk1KBmOo05RpWMWNZke6+qTM7LF/t8tqCJvXM4gujLaFOy6OWw==',key_name='tempest-TestNetworkBasicOps-602231180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-lvs1wtfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:16:05Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=c16efa00-d8c2-4271-81ee-2b14db71ec3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.090 2 DEBUG nova.network.os_vif_util [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.092 2 DEBUG nova.network.os_vif_util [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.093 2 DEBUG nova.objects.instance [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c16efa00-d8c2-4271-81ee-2b14db71ec3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.109 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <uuid>c16efa00-d8c2-4271-81ee-2b14db71ec3b</uuid>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <name>instance-00000007</name>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <memory>131072</memory>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <vcpu>1</vcpu>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <metadata>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:name>tempest-TestNetworkBasicOps-server-2113120084</nova:name>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:creationTime>2025-10-10 10:16:09</nova:creationTime>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:flavor name="m1.nano">
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:memory>128</nova:memory>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:disk>1</nova:disk>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:swap>0</nova:swap>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </nova:flavor>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:owner>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </nova:owner>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <nova:ports>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <nova:port uuid="76963199-0e40-4c51-84ca-2dbe48d96157">
Oct 10 10:16:10 compute-0 nova_compute[261329]:           <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         </nova:port>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </nova:ports>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </nova:instance>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </metadata>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <sysinfo type="smbios">
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <system>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <entry name="serial">c16efa00-d8c2-4271-81ee-2b14db71ec3b</entry>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <entry name="uuid">c16efa00-d8c2-4271-81ee-2b14db71ec3b</entry>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </system>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </sysinfo>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <os>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <boot dev="hd"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <smbios mode="sysinfo"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </os>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <features>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <acpi/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <apic/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <vmcoreinfo/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </features>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <clock offset="utc">
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <timer name="hpet" present="no"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </clock>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <cpu mode="host-model" match="exact">
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <disk type="network" device="disk">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk">
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </source>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <target dev="vda" bus="virtio"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <disk type="network" device="cdrom">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk.config">
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </source>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:16:10 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <target dev="sda" bus="sata"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <interface type="ethernet">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <mac address="fa:16:3e:17:24:d9"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <mtu size="1442"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <target dev="tap76963199-0e"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <serial type="pty">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <log file="/var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/console.log" append="off"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </serial>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <video>
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </video>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <input type="tablet" bus="usb"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <rng model="virtio">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <controller type="usb" index="0"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     <memballoon model="virtio">
Oct 10 10:16:10 compute-0 nova_compute[261329]:       <stats period="10"/>
Oct 10 10:16:10 compute-0 nova_compute[261329]:     </memballoon>
Oct 10 10:16:10 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:16:10 compute-0 nova_compute[261329]: </domain>
Oct 10 10:16:10 compute-0 nova_compute[261329]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.110 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Preparing to wait for external event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.110 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.110 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.111 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.111 2 DEBUG nova.virt.libvirt.vif [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:16:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2113120084',display_name='tempest-TestNetworkBasicOps-server-2113120084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2113120084',id=7,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJrf9KkmlaaRYT+DpqiYxbLmdhilumL9tFSmQIOr40WDGEJap0YHfLpMRrMfauuqDXnv+8RO5/xg47zMyk1KBmOo05RpWMWNZke6+qTM7LF/t8tqCJvXM4gujLaFOy6OWw==',key_name='tempest-TestNetworkBasicOps-602231180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-lvs1wtfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:16:05Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=c16efa00-d8c2-4271-81ee-2b14db71ec3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.111 2 DEBUG nova.network.os_vif_util [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.112 2 DEBUG nova.network.os_vif_util [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.112 2 DEBUG os_vif [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76963199-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.120 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76963199-0e, col_values=(('external_ids', {'iface-id': '76963199-0e40-4c51-84ca-2dbe48d96157', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:24:d9', 'vm-uuid': 'c16efa00-d8c2-4271-81ee-2b14db71ec3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:16:10 compute-0 NetworkManager[44849]: <info>  [1760091370.1227] manager: (tap76963199-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.130 2 INFO os_vif [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e')
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.177 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.177 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.177 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:17:24:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.178 2 INFO nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Using config drive
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.204 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:10 compute-0 ceph-mon[73551]: pgmap v906: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 10 10:16:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1304454409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:16:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2889548273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.929 2 DEBUG nova.network.neutron [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Updated VIF entry in instance network info cache for port 76963199-0e40-4c51-84ca-2dbe48d96157. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.930 2 DEBUG nova.network.neutron [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Updating instance_info_cache with network_info: [{"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:16:10 compute-0 nova_compute[261329]: 2025-10-10 10:16:10.970 2 DEBUG oslo_concurrency.lockutils [req-9d7ab80b-383c-477d-960e-0f7584564814 req-de4805a2-5ccb-420d-af6e-50eed48b2f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:16:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.039 2 INFO nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Creating config drive at /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/disk.config
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.050 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8wvj97ku execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.201 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8wvj97ku" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:11 compute-0 podman[276042]: 2025-10-10 10:16:11.212544794 +0000 UTC m=+0.063830860 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.242 2 DEBUG nova.storage.rbd_utils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:16:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.247 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/disk.config c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:11.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.423 2 DEBUG oslo_concurrency.processutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/disk.config c16efa00-d8c2-4271-81ee-2b14db71ec3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.425 2 INFO nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Deleting local config drive /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b/disk.config because it was imported into RBD.
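[annotation] The rbd_utils probe at 10:16:11.242 checks whether the config drive already exists in the vms pool before importing it and deleting the local copy. A sketch of that existence check with the python-rbd bindings, assuming the same cephx user "openstack" and pool "vms"; exact context-manager behavior is worth verifying against the installed python-rados/python-rbd:

    import rados
    import rbd

    def rbd_image_exists(pool, image,
                         conffile="/etc/ceph/ceph.conf", user="openstack"):
        cluster = rados.Rados(conffile=conffile, rados_id=user)
        cluster.connect()
        try:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, image):
                        return True
                except rbd.ImageNotFound:
                    # Matches the "rbd image ... does not exist" DEBUG above.
                    return False
        finally:
            cluster.shutdown()

The import itself is then done with the plain CLI, exactly as logged: rbd import --pool vms <local file> <image> --image-format=2.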
Oct 10 10:16:11 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:16:11 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 10 10:16:11 compute-0 kernel: tap76963199-0e: entered promiscuous mode
Oct 10 10:16:11 compute-0 ovn_controller[153080]: 2025-10-10T10:16:11Z|00037|binding|INFO|Claiming lport 76963199-0e40-4c51-84ca-2dbe48d96157 for this chassis.
Oct 10 10:16:11 compute-0 ovn_controller[153080]: 2025-10-10T10:16:11Z|00038|binding|INFO|76963199-0e40-4c51-84ca-2dbe48d96157: Claiming fa:16:3e:17:24:d9 10.100.0.18
Oct 10 10:16:11 compute-0 NetworkManager[44849]: <info>  [1760091371.5540] manager: (tap76963199-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.575 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:24:d9 10.100.0.18'], port_security=['fa:16:3e:17:24:d9 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'c16efa00-d8c2-4271-81ee-2b14db71ec3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3397827a-8467-4b98-b775-bce578f5aa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daddf600-eff8-433f-97e5-f9a5bf5367ce, chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=76963199-0e40-4c51-84ca-2dbe48d96157) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.577 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 76963199-0e40-4c51-84ca-2dbe48d96157 in datapath 87f6394d-4290-4eca-8ba0-18711f3ad6e0 bound to our chassis
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.580 162925 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87f6394d-4290-4eca-8ba0-18711f3ad6e0
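[annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event mechanism: the agent watches Port_Binding updates and reacts when a port flips from unclaimed to claimed by this chassis, which then triggers the metadata provisioning below. A minimal sketch of that pattern; the class is illustrative, not neutron's implementation, and the method names (ROW_UPDATE, match_fn, run) are from ovsdbapp as I recall them, so verify against the installed ovsdbapp:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundHere(row_event.RowEvent):
        """Fire when a Port_Binding row gets claimed by a chassis."""
        def __init__(self):
            # (events, table, conditions): match any update on Port_Binding.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Old row had chassis=[], new row has one: the claim that
            # ovn-controller logs as "Claiming lport ... for this chassis".
            return bool(row.chassis) and not getattr(old, "chassis", None)

        def run(self, event, row, old):
            print("port %s bound, provision metadata" % row.logical_port)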
Oct 10 10:16:11 compute-0 systemd-machined[215425]: New machine qemu-2-instance-00000007.
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.597 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[afc9177d-8605-4873-a257-38ec09ab5807]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.598 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87f6394d-41 in ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.601 269344 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87f6394d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.602 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[4609d00b-3d88-42d0-bf92-08b9318fa684]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.603 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[b415f027-0c8b-4067-ad7c-42b6842a410a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 systemd-udevd[276129]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000007.
Oct 10 10:16:11 compute-0 NetworkManager[44849]: <info>  [1760091371.6230] device (tap76963199-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:16:11 compute-0 ovn_controller[153080]: 2025-10-10T10:16:11Z|00039|binding|INFO|Setting lport 76963199-0e40-4c51-84ca-2dbe48d96157 ovn-installed in OVS
Oct 10 10:16:11 compute-0 ovn_controller[153080]: 2025-10-10T10:16:11Z|00040|binding|INFO|Setting lport 76963199-0e40-4c51-84ca-2dbe48d96157 up in Southbound
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.623 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[12298df2-d2ad-41a1-8219-542e8bc32699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 NetworkManager[44849]: <info>  [1760091371.6265] device (tap76963199-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.653 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[83fe2aee-5573-4f2a-88a3-d1cfd150f27b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.695 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[18fff10b-00bd-4c12-97f0-cc555a1ed4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 NetworkManager[44849]: <info>  [1760091371.7065] manager: (tap87f6394d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.708 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd63a62-aeee-47ff-8d53-159c85780552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 systemd-udevd[276132]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.756 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[cb38568e-b28c-4d96-8b29-eef66cd20649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.762 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a32e5c-d941-4977-99f0-c1778151bb4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 NetworkManager[44849]: <info>  [1760091371.7962] device (tap87f6394d-40): carrier: link connected
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.806 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[5542dd2e-cc01-4aca-8583-bdfc0f7972f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.829 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[73bdb770-fc0d-4c2d-8685-2ca6ce461ed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87f6394d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:68:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427536, 'reachable_time': 29788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276163, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.851 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4c8877-4ff9-4901-9afb-ac18fbad5cab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:68a4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427536, 'tstamp': 427536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276164, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.872 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[6363e3a3-c232-448a-9f9b-d34801a2f267]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87f6394d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:68:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427536, 'reachable_time': 29788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276165, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.914 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[d89552e6-b0cd-49f5-9314-f75534f8ad02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.984 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[9379951a-6c8f-4e4d-83c8-d9b0f75bc468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.985 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87f6394d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.985 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.986 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87f6394d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 NetworkManager[44849]: <info>  [1760091371.9881] manager: (tap87f6394d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 10 10:16:11 compute-0 kernel: tap87f6394d-40: entered promiscuous mode
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.992 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87f6394d-40, col_values=(('external_ids', {'iface-id': '25f0e25b-e08d-4c72-b1cf-e3d546e34451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
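[annotation] The three ovsdbapp transactions (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand writing external_ids) are the standard plumbing for the metadata VETH: external_ids:iface-id is what ovn-controller matches against a Port_Binding logical_port, hence the immediate "Releasing lport 25f0e25b-..." that follows. The equivalent steps with the ovs-vsctl CLI, sketched via subprocess; names are taken from the log:

    import subprocess

    def plug_metadata_port(port, bridge, iface_id):
        # --if-exists mirrors DelPortCommand(if_exists=True);
        # --may-exist mirrors AddPortCommand(may_exist=True).
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
                       check=True)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port],
                       check=True)
        subprocess.run(["ovs-vsctl", "set", "Interface", port,
                        "external_ids:iface-id=%s" % iface_id], check=True)

    plug_metadata_port("tap87f6394d-40", "br-int",
                       "25f0e25b-e08d-4c72-b1cf-e3d546e34451")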
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 ovn_controller[153080]: 2025-10-10T10:16:11Z|00041|binding|INFO|Releasing lport 25f0e25b-e08d-4c72-b1cf-e3d546e34451 from this chassis (sb_readonly=0)
Oct 10 10:16:11 compute-0 nova_compute[261329]: 2025-10-10 10:16:11.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.995 162925 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.996 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[5b359c80-1fc4-4796-b58e-6a9d73b3f82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.997 162925 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: global
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     log         /dev/log local0 debug
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     log-tag     haproxy-metadata-proxy-87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     user        root
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     group       root
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     maxconn     1024
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     pidfile     /var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     daemon
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: defaults
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     log global
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     mode http
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     option httplog
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     option dontlognull
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     option http-server-close
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     option forwardfor
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     retries                 3
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     timeout http-request    30s
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     timeout connect         30s
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     timeout client          32s
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     timeout server          32s
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     timeout http-keep-alive 30s
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: listen listener
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     bind 169.254.169.254:80
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:     http-request add-header X-OVN-Network-ID 87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:16:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:11.998 162925 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'env', 'PROCESS_TAG=haproxy-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87f6394d-4290-4eca-8ba0-18711f3ad6e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
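[annotation] The dump above is the haproxy config the agent renders per network, and the rootwrap command then execs haproxy inside the ovnmeta- namespace (on this deployment the haproxy binary is a podman wrapper, per the ovn_metadata_haproxy_wrapper volume mount). A cut-down render-then-launch sketch under those assumptions; the defaults section is omitted and the plain ip-netns call stands in for sudo/neutron-rootwrap:

    import subprocess

    HAPROXY_CFG = """\
    global
        log /dev/log local0 debug
        log-tag haproxy-metadata-proxy-{net}
        user root
        group root
        maxconn 1024
        pidfile {pidfile}
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata {socket}
        http-request add-header X-OVN-Network-ID {net}
    """

    def launch_metadata_proxy(net_id):
        cfg = "/var/lib/neutron/ovn-metadata-proxy/%s.conf" % net_id
        with open(cfg, "w") as f:
            f.write(HAPROXY_CFG.format(
                net=net_id,
                pidfile="/var/lib/neutron/external/pids/%s.pid.haproxy" % net_id,
                socket="/var/lib/neutron/metadata_proxy"))
        subprocess.run(["ip", "netns", "exec", "ovnmeta-%s" % net_id,
                        "haproxy", "-f", cfg], check=True)

The earlier "Unable to access ...pid.haproxy" DEBUG is the expected first-run case: no pidfile yet means no proxy to reload, so a fresh one is started.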
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.020 2 DEBUG nova.compute.manager [req-3ccc724e-a140-4011-9c02-d5bba5ca4514 req-8297bd18-784a-4cb7-bf5c-f6e5ce9710ca 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.020 2 DEBUG oslo_concurrency.lockutils [req-3ccc724e-a140-4011-9c02-d5bba5ca4514 req-8297bd18-784a-4cb7-bf5c-f6e5ce9710ca 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.020 2 DEBUG oslo_concurrency.lockutils [req-3ccc724e-a140-4011-9c02-d5bba5ca4514 req-8297bd18-784a-4cb7-bf5c-f6e5ce9710ca 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.021 2 DEBUG oslo_concurrency.lockutils [req-3ccc724e-a140-4011-9c02-d5bba5ca4514 req-8297bd18-784a-4cb7-bf5c-f6e5ce9710ca 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.021 2 DEBUG nova.compute.manager [req-3ccc724e-a140-4011-9c02-d5bba5ca4514 req-8297bd18-784a-4cb7-bf5c-f6e5ce9710ca 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Processing event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:16:12 compute-0 ceph-mon[73551]: pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:16:12 compute-0 podman[276240]: 2025-10-10 10:16:12.392936616 +0000 UTC m=+0.061494404 container create abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 10:16:12 compute-0 systemd[1]: Started libpod-conmon-abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c.scope.
Oct 10 10:16:12 compute-0 podman[276240]: 2025-10-10 10:16:12.363154275 +0000 UTC m=+0.031712093 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:16:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffd59f74a4f7987dc864cfc3e125d345d3a74906d8c82929a91decf40f695a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.478 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
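[annotation] The spawn parks on a per-instance event until Neutron reports the VIF plugged; here the network-vif-plugged event at 10:16:12.020 lands before the waiter checks, so the wait completes in 0 seconds. A stripped-down sketch of that wait-for-external-event pattern using threading (Nova itself runs on eventlet; names here are illustrative):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}          # (instance, tag) -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, instance, tag):
            with self._lock:
                return self._events.setdefault((instance, tag),
                                               threading.Event())

        def pop(self, instance, tag):
            # Called from the external-event RPC path ("Processing event ...").
            with self._lock:
                ev = self._events.pop((instance, tag), None)
            if ev is None:
                # No waiter registered: the 10:16:14 WARNING case below.
                print("unexpected event %s for %s" % (tag, instance))
            else:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("c16efa00", "network-vif-plugged")
    events.pop("c16efa00", "network-vif-plugged")   # delivered by Neutron
    assert waiter.wait(timeout=300)                 # vif_plugging_timeout analogue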
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.479 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091372.477494, c16efa00-d8c2-4271-81ee-2b14db71ec3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.480 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] VM Started (Lifecycle Event)
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.483 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:16:12 compute-0 podman[276240]: 2025-10-10 10:16:12.487289378 +0000 UTC m=+0.155847186 container init abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.487 2 INFO nova.virt.libvirt.driver [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Instance spawned successfully.
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.487 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:16:12 compute-0 podman[276240]: 2025-10-10 10:16:12.495767132 +0000 UTC m=+0.164324920 container start abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:16:12 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [NOTICE]   (276260) : New worker (276262) forked
Oct 10 10:16:12 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [NOTICE]   (276260) : Loading success.
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.522 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.531 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.532 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.533 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.533 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.534 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.535 2 DEBUG nova.virt.libvirt.driver [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.541 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.578 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.579 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091372.477766, c16efa00-d8c2-4271-81ee-2b14db71ec3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.579 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] VM Paused (Lifecycle Event)
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.609 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.614 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091372.482641, c16efa00-d8c2-4271-81ee-2b14db71ec3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.614 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] VM Resumed (Lifecycle Event)
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.618 2 INFO nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Took 6.82 seconds to spawn the instance on the hypervisor.
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.618 2 DEBUG nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.653 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.656 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.695 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] During sync_power_state the instance has a pending task (spawning). Skip.
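[annotation] The sync lines compare "current DB power_state: 0, VM power_state: 1". Those integers are nova.compute.power_state codes; to the best of my recollection the mapping is the one below, but verify against the installed nova tree:

    # Assumed from nova/compute/power_state.py (verify for your version).
    NOSTATE   = 0x00   # DB value before the first successful sync
    RUNNING   = 0x01   # what libvirt reports once the domain is up
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07

Because the task_state is still "spawning", the sync is skipped rather than forcing the DB to RUNNING mid-build, which is exactly what both "Skip." lines say.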
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.711 2 INFO nova.compute.manager [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Took 7.87 seconds to build instance.
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:12 compute-0 nova_compute[261329]: 2025-10-10 10:16:12.733 2 DEBUG oslo_concurrency.lockutils [None req-bb9dc681-853f-470c-9020-aa6dcc73e738 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
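[annotation] The acquire/release pairs on "...-events" and the final 7.960s hold on the instance UUID are oslo.concurrency named locks; the "inner" frames at lockutils.py:404/409/423 are the decorator wrapper emitting those DEBUG lines. The basic pattern, sketched; the lock names are from the log, the bodies are placeholders:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("c16efa00-d8c2-4271-81ee-2b14db71ec3b-events")
    def pop_event():
        # Runs with the named in-process lock held; acquire/release DEBUG
        # lines come from the synchronized wrapper.
        pass

    # Context-manager form for longer critical sections, analogous to
    # _locked_do_build_and_run_instance holding the instance-UUID lock:
    with lockutils.lock("c16efa00-d8c2-4271-81ee-2b14db71ec3b"):
        pass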
Oct 10 10:16:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 10 10:16:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:13.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:14 compute-0 nova_compute[261329]: 2025-10-10 10:16:14.107 2 DEBUG nova.compute.manager [req-71af56de-68bb-4330-9d2e-04d122f01e36 req-26cf9df3-1ab0-422f-b734-c65c6fe63044 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:16:14 compute-0 nova_compute[261329]: 2025-10-10 10:16:14.108 2 DEBUG oslo_concurrency.lockutils [req-71af56de-68bb-4330-9d2e-04d122f01e36 req-26cf9df3-1ab0-422f-b734-c65c6fe63044 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:14 compute-0 nova_compute[261329]: 2025-10-10 10:16:14.109 2 DEBUG oslo_concurrency.lockutils [req-71af56de-68bb-4330-9d2e-04d122f01e36 req-26cf9df3-1ab0-422f-b734-c65c6fe63044 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:14 compute-0 nova_compute[261329]: 2025-10-10 10:16:14.110 2 DEBUG oslo_concurrency.lockutils [req-71af56de-68bb-4330-9d2e-04d122f01e36 req-26cf9df3-1ab0-422f-b734-c65c6fe63044 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:14 compute-0 nova_compute[261329]: 2025-10-10 10:16:14.110 2 DEBUG nova.compute.manager [req-71af56de-68bb-4330-9d2e-04d122f01e36 req-26cf9df3-1ab0-422f-b734-c65c6fe63044 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] No waiting events found dispatching network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:16:14 compute-0 nova_compute[261329]: 2025-10-10 10:16:14.110 2 WARNING nova.compute.manager [req-71af56de-68bb-4330-9d2e-04d122f01e36 req-26cf9df3-1ab0-422f-b734-c65c6fe63044 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received unexpected event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 for instance with vm_state active and task_state None.
Oct 10 10:16:14 compute-0 ceph-mon[73551]: pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 10 10:16:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:16:15 compute-0 nova_compute[261329]: 2025-10-10 10:16:15.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:15.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
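[annotation] These beast access lines recur every ~2 seconds from 192.168.122.100 and .102: anonymous "HEAD / HTTP/1.0" probes, consistent with load-balancer health checks against the RGW frontend. A small parser for that exact line shape, grounded in the samples above; fields RGW leaves unset appear as "-":

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f96beba75d0: 192.168.122.100 - anonymous '
            '[10/Oct/2025:10:16:15.173 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000033s')
    m = BEAST_RE.search(line)
    assert m and m.group("status") == "200"
    print(m.group("client"), m.group("request"), m.group("latency"))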
Oct 10 10:16:16 compute-0 ceph-mon[73551]: pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:16:16
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'images', 'default.rgw.log', '.mgr', '.nfs', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:16:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:16:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011088311722667187 of space, bias 1.0, pg target 0.3326493516800156 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
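[annotation] The pg_autoscaler lines follow a visible pattern: pg target = usage_ratio * 300 * bias, where 300 is consistent with 3 OSDs times the default mon_target_pg_per_osd of 100 (both inferred, not logged). The target is then snapped to a power of two no lower than the pool's minimum, which is why tiny targets still show 32, or 16 for the CephFS metadata pool. A worked check against the logged numbers under those assumptions:

    # Assumptions: 3 OSDs, mon_target_pg_per_osd = 100 (inferred, not logged).
    PG_BUDGET = 3 * 100

    def pg_target(usage_ratio, bias):
        return usage_ratio * PG_BUDGET * bias

    # Pool 'vms': bias 1.0 -> logged target 0.3326493516800156
    assert abs(pg_target(0.0011088311722667187, 1.0)
               - 0.3326493516800156) < 1e-12
    # Pool 'cephfs.cephfs.meta': bias 4.0 -> logged target 0.0006104707950771635
    assert abs(pg_target(5.087256625643029e-07, 4.0)
               - 0.0006104707950771635) < 1e-15

Since every quantized value equals its current pg_num except cephfs.cephfs.meta (16 vs 32, below the default change threshold), the autoscaler makes no adjustment this pass.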
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:16:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:16:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:16:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:17.164Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:16:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:17.165Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:17.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
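The three alertmanager lines above show both ceph-dashboard webhook receivers failing with TCP i/o timeouts. A plain reachability probe, using the host:port pairs copied from those messages, reproduces the check Alertmanager keeps retrying:

import socket

# Probe the two dashboard webhook receivers that Alertmanager reports as
# unreachable above (host:port pairs taken from the log messages).
TARGETS = [
    ("compute-1.ctlplane.example.com", 8443),
    ("compute-2.ctlplane.example.com", 8443),
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as exc:  # DNS failure, refusal, or the i/o timeout seen above
        print(f"{host}:{port} unreachable: {exc}")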
Oct 10 10:16:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:17.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:17.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
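The anonymous "HEAD /" requests above arrive from 192.168.122.100 and 192.168.122.102 roughly every two seconds, which looks like external health checking. The beast access-log lines have a fixed field layout (request pointer, client IP, user, timestamp, request line, status, bytes, latency) that a small regex can pull apart:

import re

# Parse the radosgw beast access-log lines seen above.
BEAST = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f96beba75d0: 192.168.122.100 - anonymous '
        '[10/Oct/2025:10:16:17.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000032s')
m = BEAST.search(line)
print(m.group('ip'), m.group('request'), m.group('status'), m.group('latency'))
# -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000032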
Oct 10 10:16:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:17 compute-0 nova_compute[261329]: 2025-10-10 10:16:17.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:18 compute-0 ceph-mon[73551]: pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:16:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
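The recurring pgmap lines (emitted by ceph-mgr and echoed by ceph-mon about two seconds later) carry the cluster's headline state; their fixed shape makes them easy to extract for trending:

import re

# Parse the recurring pgmap status lines into their headline numbers.
PGMAP = re.compile(
    r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
    r'(?P<data>\d+(?:\.\d+)? \w+) data, (?P<used>\d+(?:\.\d+)? \w+) used, '
    r'(?P<avail>\d+(?:\.\d+)? \w+) / (?P<total>\d+(?:\.\d+)? \w+) avail'
)

line = ('pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, '
        '60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s')
print(PGMAP.search(line).groupdict())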
Oct 10 10:16:19 compute-0 sudo[276278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:16:19 compute-0 sudo[276278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:19 compute-0 sudo[276278]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:19.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:20 compute-0 nova_compute[261329]: 2025-10-10 10:16:20.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:20 compute-0 ceph-mon[73551]: pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 10 10:16:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Oct 10 10:16:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:21.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:22 compute-0 ceph-mon[73551]: pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Oct 10 10:16:22 compute-0 nova_compute[261329]: 2025-10-10 10:16:22.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 77 op/s
Oct 10 10:16:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:23.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:23 compute-0 ceph-mon[73551]: pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 77 op/s
Oct 10 10:16:23 compute-0 ovn_controller[153080]: 2025-10-10T10:16:23Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:24:d9 10.100.0.18
Oct 10 10:16:23 compute-0 ovn_controller[153080]: 2025-10-10T10:16:23Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:24:d9 10.100.0.18
Oct 10 10:16:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 66 op/s
Oct 10 10:16:25 compute-0 nova_compute[261329]: 2025-10-10 10:16:25.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:25.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:25.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:26 compute-0 ceph-mon[73551]: pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 66 op/s
Oct 10 10:16:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:16:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1213483499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:16:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:16:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1213483499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:16:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 66 op/s
Oct 10 10:16:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1213483499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:16:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1213483499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:16:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:27.165Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:16:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:27.165Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:27.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:16:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:27.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:27.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:27 compute-0 nova_compute[261329]: 2025-10-10 10:16:27.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:28 compute-0 ceph-mon[73551]: pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 66 op/s
Oct 10 10:16:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 10 10:16:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:29 compute-0 podman[276314]: 2025-10-10 10:16:29.245019231 +0000 UTC m=+0.074474182 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 10 10:16:29 compute-0 podman[276315]: 2025-10-10 10:16:29.25954501 +0000 UTC m=+0.094249920 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 10 10:16:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:29.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:29 compute-0 podman[276316]: 2025-10-10 10:16:29.274464891 +0000 UTC m=+0.102907969 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
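The podman health_status events above (multipathd, iscsid, ovn_controller) all report health_status=healthy with a zero failing streak, and each embeds the container's full config_data as a Python-style dict literal. A captured excerpt decodes directly with the standard library (the snippet below is shortened from the multipathd event):

import ast

# Decode the config_data dict literal embedded in the health_status events.
snippet = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
           "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
           "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}")
config = ast.literal_eval(snippet)
print(config['healthcheck']['test'])   # -> /openstack/healthcheck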
Oct 10 10:16:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:30 compute-0 ceph-mon[73551]: pgmap v916: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 10 10:16:30 compute-0 nova_compute[261329]: 2025-10-10 10:16:30.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:16:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:16:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:16:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:31.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:16:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:32 compute-0 ceph-mon[73551]: pgmap v917: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:16:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:32 compute-0 nova_compute[261329]: 2025-10-10 10:16:32.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:16:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:33.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:34 compute-0 ceph-mon[73551]: pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:16:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 10 10:16:35 compute-0 nova_compute[261329]: 2025-10-10 10:16:35.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:35.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:35.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:36 compute-0 ceph-mon[73551]: pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 10 10:16:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 10 10:16:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:37.167Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:37.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:37.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:37] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 10 10:16:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:37] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 10 10:16:37 compute-0 nova_compute[261329]: 2025-10-10 10:16:37.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:38 compute-0 ceph-mon[73551]: pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 10 10:16:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:16:39 compute-0 sudo[276390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:16:39 compute-0 sudo[276390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:39 compute-0 sudo[276390]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:39.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:39.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:40 compute-0 nova_compute[261329]: 2025-10-10 10:16:40.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:40 compute-0 ceph-mon[73551]: pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:16:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:16:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:41.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:41.905 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:41.906 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:16:41.906 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:42 compute-0 ceph-mon[73551]: pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:16:42 compute-0 podman[276418]: 2025-10-10 10:16:42.214644946 +0000 UTC m=+0.050621594 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:16:42 compute-0 nova_compute[261329]: 2025-10-10 10:16:42.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 10 10:16:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:43.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:44 compute-0 ceph-mon[73551]: pgmap v923: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 10 10:16:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Oct 10 10:16:45 compute-0 nova_compute[261329]: 2025-10-10 10:16:45.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:45.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:45.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:46 compute-0 ceph-mon[73551]: pgmap v924: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Oct 10 10:16:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:16:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:16:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:16:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:16:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:16:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:16:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:16:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Oct 10 10:16:47 compute-0 nova_compute[261329]: 2025-10-10 10:16:47.075 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:47 compute-0 nova_compute[261329]: 2025-10-10 10:16:47.105 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Triggering sync for uuid c16efa00-d8c2-4271-81ee-2b14db71ec3b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 10 10:16:47 compute-0 nova_compute[261329]: 2025-10-10 10:16:47.106 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:47 compute-0 nova_compute[261329]: 2025-10-10 10:16:47.106 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:47 compute-0 nova_compute[261329]: 2025-10-10 10:16:47.146 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
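The five nova_compute lines above are one pass of the _sync_power_states periodic task: it takes a named lock on the instance UUID, runs the per-instance sync, and releases the lock 0.040s later. A minimal sketch of that locking pattern with oslo.concurrency follows; the helper name stands in for nova's internals and is illustrative only:

from oslo_concurrency import lockutils

INSTANCE_UUID = 'c16efa00-d8c2-4271-81ee-2b14db71ec3b'  # from the log

def query_driver_power_state_and_sync(uuid):
    # Placeholder for the real driver query; nova's version talks to libvirt.
    print(f'syncing power state for {uuid}')

# 'Acquiring lock' / 'acquired' / 'released' in the log bracket this block.
with lockutils.lock(INSTANCE_UUID):
    query_driver_power_state_and_sync(INSTANCE_UUID)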
Oct 10 10:16:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:47.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:16:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:47.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:47.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:47] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:47] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:47 compute-0 nova_compute[261329]: 2025-10-10 10:16:47.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:48 compute-0 ceph-mon[73551]: pgmap v925: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Oct 10 10:16:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 10 10:16:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:16:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:49.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:16:49 compute-0 sudo[276444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:16:49 compute-0 sudo[276444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:49 compute-0 sudo[276444]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:49.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:49 compute-0 sudo[276469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:16:49 compute-0 sudo[276469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:49 compute-0 sudo[276469]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:50 compute-0 nova_compute[261329]: 2025-10-10 10:16:50.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:50 compute-0 ceph-mon[73551]: pgmap v926: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 10 10:16:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 2 op/s
Oct 10 10:16:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:51.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:51.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:16:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:16:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
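The mon_command burst above is the cephadm mgr module refreshing host state (the config-key entries cache compute-2's device inventory) and gathering what it needs before creating OSDs: the bootstrap-osd keyring, a minimal conf, and the tree of destroyed OSDs. Most audit lines quote the dispatched command as JSON (the config-key set entries omit their payload), so the prefixes can be recovered mechanically:

import json
import re

# Pull the dispatched command out of a ceph-mon audit line like those above.
AUDIT = re.compile(r'cmd=\[(?P<cmd>\{.*\})\]: dispatch')

line = ("log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' "
        "entity='mgr.compute-0.xkdepb' "
        'cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch')
m = AUDIT.search(line)
print(json.loads(m.group('cmd'))['prefix'])   # -> auth get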
Oct 10 10:16:51 compute-0 sudo[276529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:16:51 compute-0 sudo[276529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:51 compute-0 sudo[276529]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:51 compute-0 sudo[276554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:16:51 compute-0 sudo[276554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.268 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.269 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.269 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:16:52 compute-0 ceph-mon[73551]: pgmap v927: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 2 op/s
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:16:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.382678986 +0000 UTC m=+0.051201563 container create 3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 10:16:52 compute-0 systemd[1]: Started libpod-conmon-3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf.scope.
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.361087159 +0000 UTC m=+0.029609776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:16:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.480389656 +0000 UTC m=+0.148912283 container init 3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.488946442 +0000 UTC m=+0.157469009 container start 3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.492338951 +0000 UTC m=+0.160861518 container attach 3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wiles, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:16:52 compute-0 quizzical_wiles[276638]: 167 167
Oct 10 10:16:52 compute-0 systemd[1]: libpod-3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf.scope: Deactivated successfully.
Oct 10 10:16:52 compute-0 conmon[276638]: conmon 3dc404b35a8dd099e429 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf.scope/container/memory.events
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.496577628 +0000 UTC m=+0.165100195 container died 3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 10:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-02f0b6580018cd2d7f0897977a014e485deb20e3233456f5d0fd577469fdc9b6-merged.mount: Deactivated successfully.
Oct 10 10:16:52 compute-0 podman[276622]: 2025-10-10 10:16:52.539984837 +0000 UTC m=+0.208507404 container remove 3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:16:52 compute-0 systemd[1]: libpod-conmon-3dc404b35a8dd099e429539e622ed3f76cd20477ed521f4114e51211bb07d2bf.scope: Deactivated successfully.
Oct 10 10:16:52 compute-0 podman[276664]: 2025-10-10 10:16:52.743109368 +0000 UTC m=+0.052849605 container create 4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:52 compute-0 systemd[1]: Started libpod-conmon-4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa.scope.
Oct 10 10:16:52 compute-0 podman[276664]: 2025-10-10 10:16:52.724398465 +0000 UTC m=+0.034138732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:16:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54def23302a96474b1a9dd9e70c81cbd5289939bb02781134185194868a5ff5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54def23302a96474b1a9dd9e70c81cbd5289939bb02781134185194868a5ff5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54def23302a96474b1a9dd9e70c81cbd5289939bb02781134185194868a5ff5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54def23302a96474b1a9dd9e70c81cbd5289939bb02781134185194868a5ff5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54def23302a96474b1a9dd9e70c81cbd5289939bb02781134185194868a5ff5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:52 compute-0 podman[276664]: 2025-10-10 10:16:52.862655772 +0000 UTC m=+0.172396039 container init 4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:16:52 compute-0 podman[276664]: 2025-10-10 10:16:52.877274794 +0000 UTC m=+0.187015031 container start 4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 10:16:52 compute-0 podman[276664]: 2025-10-10 10:16:52.881233722 +0000 UTC m=+0.190973979 container attach 4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.880 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.881 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquired lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.881 2 DEBUG nova.network.neutron [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:16:52 compute-0 nova_compute[261329]: 2025-10-10 10:16:52.881 2 DEBUG nova.objects.instance [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid c16efa00-d8c2-4271-81ee-2b14db71ec3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:16:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 3 op/s
Oct 10 10:16:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:53.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:53 compute-0 zealous_buck[276680]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:16:53 compute-0 zealous_buck[276680]: --> All data devices are unavailable
Oct 10 10:16:53 compute-0 systemd[1]: libpod-4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa.scope: Deactivated successfully.
Oct 10 10:16:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:53.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:53 compute-0 podman[276696]: 2025-10-10 10:16:53.32279137 +0000 UTC m=+0.028024665 container died 4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-54def23302a96474b1a9dd9e70c81cbd5289939bb02781134185194868a5ff5f-merged.mount: Deactivated successfully.
Oct 10 10:16:53 compute-0 podman[276696]: 2025-10-10 10:16:53.362600293 +0000 UTC m=+0.067833538 container remove 4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 10:16:53 compute-0 systemd[1]: libpod-conmon-4c46a1bfb03080f40da8bf61dda408b753702f4b6a4aef0d8de9683a6d5faaaa.scope: Deactivated successfully.
Oct 10 10:16:53 compute-0 sudo[276554]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:53 compute-0 sudo[276711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:16:53 compute-0 sudo[276711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:53 compute-0 sudo[276711]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:53 compute-0 sudo[276736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:16:53 compute-0 sudo[276736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:53 compute-0 nova_compute[261329]: 2025-10-10 10:16:53.944 2 DEBUG nova.network.neutron [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Updating instance_info_cache with network_info: [{"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:16:53 compute-0 nova_compute[261329]: 2025-10-10 10:16:53.965 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Releasing lock "refresh_cache-c16efa00-d8c2-4271-81ee-2b14db71ec3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:16:53 compute-0 nova_compute[261329]: 2025-10-10 10:16:53.965 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:54.022077848 +0000 UTC m=+0.059461759 container create 2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goldberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:16:54 compute-0 systemd[1]: Started libpod-conmon-2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437.scope.
Oct 10 10:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:53.99952222 +0000 UTC m=+0.036906161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:54.10556789 +0000 UTC m=+0.142951801 container init 2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goldberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:54.114015002 +0000 UTC m=+0.151398923 container start 2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goldberg, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:54.118156326 +0000 UTC m=+0.155540307 container attach 2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goldberg, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:16:54 compute-0 agitated_goldberg[276818]: 167 167
Oct 10 10:16:54 compute-0 systemd[1]: libpod-2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437.scope: Deactivated successfully.
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:54.119127117 +0000 UTC m=+0.156510998 container died 2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6a280a85d8920d271cd33d0c630e52a619fbce28ce5429729be63b9615e55d8-merged.mount: Deactivated successfully.
Oct 10 10:16:54 compute-0 podman[276803]: 2025-10-10 10:16:54.161139742 +0000 UTC m=+0.198523643 container remove 2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:16:54 compute-0 systemd[1]: libpod-conmon-2d5bb9867ce6f51c0b8fa7369a23b4ae4f1c17ada2de4492627c59fe913e9437.scope: Deactivated successfully.
Oct 10 10:16:54 compute-0 nova_compute[261329]: 2025-10-10 10:16:54.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:54 compute-0 nova_compute[261329]: 2025-10-10 10:16:54.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:54 compute-0 nova_compute[261329]: 2025-10-10 10:16:54.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:54 compute-0 ceph-mon[73551]: pgmap v928: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 3 op/s
Oct 10 10:16:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.35800582 +0000 UTC m=+0.050082346 container create f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:16:54 compute-0 systemd[1]: Started libpod-conmon-f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28.scope.
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.336810397 +0000 UTC m=+0.028887013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa83d08ca5db47b9782a824d3b7cac51cb4cb5002c42647e2fbd43787125712c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa83d08ca5db47b9782a824d3b7cac51cb4cb5002c42647e2fbd43787125712c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa83d08ca5db47b9782a824d3b7cac51cb4cb5002c42647e2fbd43787125712c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa83d08ca5db47b9782a824d3b7cac51cb4cb5002c42647e2fbd43787125712c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.463282474 +0000 UTC m=+0.155359050 container init f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.469381481 +0000 UTC m=+0.161458007 container start f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.472429069 +0000 UTC m=+0.164505765 container attach f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:16:54 compute-0 brave_gould[276861]: {
Oct 10 10:16:54 compute-0 brave_gould[276861]:     "0": [
Oct 10 10:16:54 compute-0 brave_gould[276861]:         {
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "devices": [
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "/dev/loop3"
Oct 10 10:16:54 compute-0 brave_gould[276861]:             ],
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "lv_name": "ceph_lv0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "lv_size": "21470642176",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "name": "ceph_lv0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "tags": {
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.cluster_name": "ceph",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.crush_device_class": "",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.encrypted": "0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.osd_id": "0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.type": "block",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.vdo": "0",
Oct 10 10:16:54 compute-0 brave_gould[276861]:                 "ceph.with_tpm": "0"
Oct 10 10:16:54 compute-0 brave_gould[276861]:             },
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "type": "block",
Oct 10 10:16:54 compute-0 brave_gould[276861]:             "vg_name": "ceph_vg0"
Oct 10 10:16:54 compute-0 brave_gould[276861]:         }
Oct 10 10:16:54 compute-0 brave_gould[276861]:     ]
Oct 10 10:16:54 compute-0 brave_gould[276861]: }
Oct 10 10:16:54 compute-0 systemd[1]: libpod-f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28.scope: Deactivated successfully.
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.83715521 +0000 UTC m=+0.529231756 container died f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 10:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa83d08ca5db47b9782a824d3b7cac51cb4cb5002c42647e2fbd43787125712c-merged.mount: Deactivated successfully.
Oct 10 10:16:54 compute-0 podman[276845]: 2025-10-10 10:16:54.885357255 +0000 UTC m=+0.577433791 container remove f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:16:54 compute-0 systemd[1]: libpod-conmon-f9b0502df040587d8de799ed1404adb72befcb03a2b0e7cf8139c5f2b5bcda28.scope: Deactivated successfully.
Oct 10 10:16:54 compute-0 sudo[276736]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:55 compute-0 sudo[276883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:16:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:16:55 compute-0 sudo[276883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:55 compute-0 sudo[276883]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:55 compute-0 sudo[276908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:16:55 compute-0 sudo[276908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:55 compute-0 nova_compute[261329]: 2025-10-10 10:16:55.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:55.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:55 compute-0 nova_compute[261329]: 2025-10-10 10:16:55.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:55.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3908480547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.500881232 +0000 UTC m=+0.046382837 container create fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 10:16:55 compute-0 systemd[1]: Started libpod-conmon-fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f.scope.
Oct 10 10:16:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.564350658 +0000 UTC m=+0.109852273 container init fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chatterjee, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.570383833 +0000 UTC m=+0.115885428 container start fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chatterjee, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.480800304 +0000 UTC m=+0.026301919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.573447202 +0000 UTC m=+0.118948817 container attach fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:16:55 compute-0 nice_chatterjee[276991]: 167 167
Oct 10 10:16:55 compute-0 systemd[1]: libpod-fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f.scope: Deactivated successfully.
Oct 10 10:16:55 compute-0 conmon[276991]: conmon fbc0fe3023f196aabdaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f.scope/container/memory.events
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.576394487 +0000 UTC m=+0.121896082 container died fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad560b272926c7e979e58e89360e442bb9ab36229edde59f08f68254e3997725-merged.mount: Deactivated successfully.
Oct 10 10:16:55 compute-0 podman[276975]: 2025-10-10 10:16:55.608555244 +0000 UTC m=+0.154056839 container remove fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 10:16:55 compute-0 systemd[1]: libpod-conmon-fbc0fe3023f196aabdaf5b99b4aa790103e4c0e66a4c6be617116622de465d4f.scope: Deactivated successfully.
Oct 10 10:16:55 compute-0 podman[277014]: 2025-10-10 10:16:55.778355329 +0000 UTC m=+0.043071530 container create 38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mayer, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 10:16:55 compute-0 systemd[1]: Started libpod-conmon-38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81.scope.
Oct 10 10:16:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d739a73e1625152e16bcf5d504323a832979b078acf1d66f6bbe5f24f33486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d739a73e1625152e16bcf5d504323a832979b078acf1d66f6bbe5f24f33486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d739a73e1625152e16bcf5d504323a832979b078acf1d66f6bbe5f24f33486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d739a73e1625152e16bcf5d504323a832979b078acf1d66f6bbe5f24f33486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:16:55 compute-0 podman[277014]: 2025-10-10 10:16:55.759170681 +0000 UTC m=+0.023886912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:16:55 compute-0 podman[277014]: 2025-10-10 10:16:55.856084095 +0000 UTC m=+0.120800337 container init 38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:16:55 compute-0 podman[277014]: 2025-10-10 10:16:55.866921115 +0000 UTC m=+0.131637346 container start 38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mayer, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:16:55 compute-0 podman[277014]: 2025-10-10 10:16:55.87110554 +0000 UTC m=+0.135821751 container attach 38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mayer, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:16:56 compute-0 nova_compute[261329]: 2025-10-10 10:16:56.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:56 compute-0 nova_compute[261329]: 2025-10-10 10:16:56.239 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:56 compute-0 ceph-mon[73551]: pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:16:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2955662946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:56 compute-0 lvm[277106]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:16:56 compute-0 lvm[277106]: VG ceph_vg0 finished
Oct 10 10:16:56 compute-0 nostalgic_mayer[277031]: {}
Oct 10 10:16:56 compute-0 systemd[1]: libpod-38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81.scope: Deactivated successfully.
Oct 10 10:16:56 compute-0 systemd[1]: libpod-38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81.scope: Consumed 1.203s CPU time.
Oct 10 10:16:56 compute-0 podman[277110]: 2025-10-10 10:16:56.673300296 +0000 UTC m=+0.025077389 container died 38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1d739a73e1625152e16bcf5d504323a832979b078acf1d66f6bbe5f24f33486-merged.mount: Deactivated successfully.
Oct 10 10:16:56 compute-0 podman[277110]: 2025-10-10 10:16:56.713368309 +0000 UTC m=+0.065145392 container remove 38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Oct 10 10:16:56 compute-0 systemd[1]: libpod-conmon-38bb6316636ded7d965e6d2146faea43c488cc1b0a3f6b9ab8f56e64ad586b81.scope: Deactivated successfully.
Oct 10 10:16:56 compute-0 sudo[276908]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:16:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:16:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:56 compute-0 sudo[277126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:16:56 compute-0 sudo[277126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:56 compute-0 sudo[277126]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:16:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:57.169Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:16:57.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:16:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:16:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:57.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.270 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.271 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.271 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
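[annotation] The acquire/wait/held triplet above is oslo.concurrency's named lock wrapped around the resource tracker; the timings in the log are emitted by lockutils itself. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (the lock name "compute_resources" is the one from the log, the function body is a placeholder):

    # Named-lock pattern that produces acquire/wait/held log lines like the
    # ones above. The body is a placeholder, not nova's implementation.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # prune cached compute-node records here

    # Equivalent context-manager form:
    with lockutils.lock("compute_resources"):
        pass  # critical section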
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.272 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.272 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
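[annotation] To audit Ceph-backed disk capacity, nova shells out to the exact command logged above. The sketch below runs the same command with plain subprocess and pulls the cluster totals out of the JSON; the "stats"/"total_avail_bytes" layout is what current Ceph releases emit, but treat the key names as an assumption if your release differs.

    # Run the command nova logs above and read the cluster totals.
    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    df = json.loads(out.stdout)
    stats = df["stats"]  # key layout assumed from current Ceph releases
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)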
Oct 10 10:16:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:16:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:57.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:16:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:57] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:16:57] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:57 compute-0 ceph-mon[73551]: pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:16:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:16:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2386263607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.857 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.940 2 DEBUG nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:16:57 compute-0 nova_compute[261329]: 2025-10-10 10:16:57.941 2 DEBUG nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.141 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.143 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4442MB free_disk=59.89700698852539GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.143 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.144 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.419 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Instance c16efa00-d8c2-4271-81ee-2b14db71ec3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.420 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.421 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.450 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing inventories for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.467 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating ProviderTree inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.468 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
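[annotation] The inventory payload above is what determines schedulable capacity: placement exposes each resource class as (total - reserved) * allocation_ratio. Worked through with the numbers from this log line:

    # Effective capacity per resource class, using the inventory above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2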
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.485 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing aggregate associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.505 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing trait associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_CLMUL,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:16:58 compute-0 nova_compute[261329]: 2025-10-10 10:16:58.541 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2386263607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 7.7 KiB/s wr, 2 op/s
Oct 10 10:16:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:16:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854478764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:59 compute-0 nova_compute[261329]: 2025-10-10 10:16:59.076 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:59 compute-0 nova_compute[261329]: 2025-10-10 10:16:59.084 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:16:59 compute-0 nova_compute[261329]: 2025-10-10 10:16:59.101 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:16:59 compute-0 nova_compute[261329]: 2025-10-10 10:16:59.125 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:16:59 compute-0 nova_compute[261329]: 2025-10-10 10:16:59.126 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:16:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:59.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:16:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:16:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:59 compute-0 sudo[277199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:16:59 compute-0 sudo[277199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:59 compute-0 sudo[277199]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:59 compute-0 podman[277224]: 2025-10-10 10:16:59.42295419 +0000 UTC m=+0.078809413 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, tcib_managed=true)
Oct 10 10:16:59 compute-0 podman[277223]: 2025-10-10 10:16:59.425889664 +0000 UTC m=+0.084442904 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Oct 10 10:16:59 compute-0 podman[277225]: 2025-10-10 10:16:59.464164738 +0000 UTC m=+0.120777285 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
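[annotation] The three health_status events above are podman's periodic container healthchecks for iscsid, multipathd and ovn_controller, all reporting healthy with a zero failing streak. The most recent state can also be read back on demand; a sketch, assuming a recent podman where the inspect template exposes .State.Health (older releases used .State.Healthcheck):

    # Query the last health state podman recorded for a container.
    # Template path is an assumption: recent podman uses .State.Health,
    # older podman exposed it as .State.Healthcheck.
    import subprocess

    def health(name: str) -> str:
        return subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name],
            capture_output=True, check=True, text=True,
        ).stdout.strip()

    print(health("iscsid"))  # expected "healthy", matching the event above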
Oct 10 10:16:59 compute-0 ceph-mon[73551]: pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 7.7 KiB/s wr, 2 op/s
Oct 10 10:16:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/854478764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:00 compute-0 nova_compute[261329]: 2025-10-10 10:17:00.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:17:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:01.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:01.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:17:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:02 compute-0 ceph-mon[73551]: pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:17:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
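[annotation] Both the mgr's "osd blocklist ls" and the openstack client's "df" seen in these audit lines are mon commands dispatched to the leader. The same call can be made from Python with librados; a sketch, assuming python3-rados is installed and the client.openstack keyring is readable by the caller:

    # Issue the same mon command the mgr dispatches above, via librados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")  # needs a readable keyring
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    if ret == 0:
        print(json.loads(outbuf or b"[]"))
    cluster.shutdown()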
Oct 10 10:17:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/867703427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:02 compute-0 ovn_controller[153080]: 2025-10-10T10:17:02Z|00042|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 10 10:17:02 compute-0 nova_compute[261329]: 2025-10-10 10:17:02.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 4.3 KiB/s wr, 2 op/s
Oct 10 10:17:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1641573569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:03.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:04 compute-0 ceph-mon[73551]: pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 4.3 KiB/s wr, 2 op/s
Oct 10 10:17:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 10 10:17:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 10 10:17:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.390276) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424390452, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 257, "total_data_size": 2826013, "memory_usage": 2865984, "flush_reason": "Manual Compaction"}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424410278, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2743362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26859, "largest_seqno": 28378, "table_properties": {"data_size": 2736402, "index_size": 3967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14641, "raw_average_key_size": 19, "raw_value_size": 2722384, "raw_average_value_size": 3634, "num_data_blocks": 174, "num_entries": 749, "num_filter_entries": 749, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091287, "oldest_key_time": 1760091287, "file_creation_time": 1760091424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 20210 microseconds, and 11901 cpu microseconds.
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.410507) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2743362 bytes OK
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.410560) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.411866) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.411879) EVENT_LOG_v1 {"time_micros": 1760091424411875, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.411899) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2819501, prev total WAL file size 2819501, number of live WAL files 2.
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.412996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2679KB)], [59(13MB)]
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424413024, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17069164, "oldest_snapshot_seqno": -1}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6028 keys, 16925124 bytes, temperature: kUnknown
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424504168, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16925124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16881872, "index_size": 27078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153350, "raw_average_key_size": 25, "raw_value_size": 16770307, "raw_average_value_size": 2782, "num_data_blocks": 1111, "num_entries": 6028, "num_filter_entries": 6028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.504631) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16925124 bytes
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.506308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.9 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(12.4) write-amplify(6.2) OK, records in: 6560, records dropped: 532 output_compression: NoCompression
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.506368) EVENT_LOG_v1 {"time_micros": 1760091424506353, "job": 32, "event": "compaction_finished", "compaction_time_micros": 91342, "compaction_time_cpu_micros": 33126, "output_level": 6, "num_output_files": 1, "total_output_size": 16925124, "num_input_records": 6560, "num_output_records": 6028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424507998, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424514311, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.412953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.514575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.514585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.514589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.514591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:17:04.514594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
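[annotation] Everything the monitor's rocksdb prints after the EVENT_LOG_v1 marker is a self-contained JSON document, so the flush/compaction activity above can be summarized mechanically. A minimal sketch that reads a saved journal excerpt on stdin (event names and fields taken from the records above):

    # Summarize rocksdb EVENT_LOG_v1 records from a saved journal excerpt.
    import json
    import re
    import sys

    MARK = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    for line in sys.stdin:
        m = MARK.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_finished":
            print("job", ev["job"], "flush finished")
        elif ev.get("event") == "compaction_finished":
            print("job", ev["job"],
                  f'{ev["compaction_time_micros"] / 1e6:.3f}s,',
                  ev["total_output_size"], "bytes out")
    # On the excerpt above this prints job 31's flush and job 32's
    # 0.091s compaction producing 16925124 bytes.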
Oct 10 10:17:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Oct 10 10:17:05 compute-0 nova_compute[261329]: 2025-10-10 10:17:05.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:05.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:05.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:05 compute-0 ceph-mon[73551]: pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Oct 10 10:17:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Oct 10 10:17:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:07.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:17:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:07.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:07.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:07] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:17:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:07] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:17:07 compute-0 nova_compute[261329]: 2025-10-10 10:17:07.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:08 compute-0 ceph-mon[73551]: pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Oct 10 10:17:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 KiB/s wr, 179 op/s
Oct 10 10:17:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:09.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:09.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:10 compute-0 ceph-mon[73551]: pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 KiB/s wr, 179 op/s
Oct 10 10:17:10 compute-0 nova_compute[261329]: 2025-10-10 10:17:10.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 2.0 KiB/s wr, 178 op/s
Oct 10 10:17:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.412 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.413 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.413 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.414 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.414 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.416 2 INFO nova.compute.manager [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Terminating instance
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.418 2 DEBUG nova.compute.manager [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:17:11 compute-0 kernel: tap76963199-0e (unregistering): left promiscuous mode
Oct 10 10:17:11 compute-0 NetworkManager[44849]: <info>  [1760091431.4784] device (tap76963199-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:17:11 compute-0 ovn_controller[153080]: 2025-10-10T10:17:11Z|00043|binding|INFO|Releasing lport 76963199-0e40-4c51-84ca-2dbe48d96157 from this chassis (sb_readonly=0)
Oct 10 10:17:11 compute-0 ovn_controller[153080]: 2025-10-10T10:17:11Z|00044|binding|INFO|Setting lport 76963199-0e40-4c51-84ca-2dbe48d96157 down in Southbound
Oct 10 10:17:11 compute-0 ovn_controller[153080]: 2025-10-10T10:17:11Z|00045|binding|INFO|Removing iface tap76963199-0e ovn-installed in OVS
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.499 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:24:d9 10.100.0.18'], port_security=['fa:16:3e:17:24:d9 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'c16efa00-d8c2-4271-81ee-2b14db71ec3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3397827a-8467-4b98-b775-bce578f5aa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daddf600-eff8-433f-97e5-f9a5bf5367ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=76963199-0e40-4c51-84ca-2dbe48d96157) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.502 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 76963199-0e40-4c51-84ca-2dbe48d96157 in datapath 87f6394d-4290-4eca-8ba0-18711f3ad6e0 unbound from our chassis
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.505 162925 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87f6394d-4290-4eca-8ba0-18711f3ad6e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.507 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[dc95040b-0007-4998-921c-50795a35c625]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.508 162925 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 namespace which is not needed anymore
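[annotation] The unbind sequence above (lport released from the chassis, set down in Southbound, tap removed, metadata namespace torn down) can be cross-checked against the Southbound DB. A sketch using ovn-sbctl with the logical port UUID from the log; it assumes ovn-sbctl on this host can reach the Southbound DB:

    # Confirm the logical port is no longer bound to any chassis.
    import subprocess

    LPORT = "76963199-0e40-4c51-84ca-2dbe48d96157"  # from the log

    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis", "find",
         "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, check=True, text=True,
    ).stdout.strip()
    print("still bound" if out else "unbound (or row already deleted)")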
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 10 10:17:11 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Consumed 14.337s CPU time.
Oct 10 10:17:11 compute-0 systemd-machined[215425]: Machine qemu-2-instance-00000007 terminated.
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.659 2 INFO nova.virt.libvirt.driver [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Instance destroyed successfully.
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.660 2 DEBUG nova.objects.instance [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid c16efa00-d8c2-4271-81ee-2b14db71ec3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.680 2 DEBUG nova.virt.libvirt.vif [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:16:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2113120084',display_name='tempest-TestNetworkBasicOps-server-2113120084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2113120084',id=7,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJrf9KkmlaaRYT+DpqiYxbLmdhilumL9tFSmQIOr40WDGEJap0YHfLpMRrMfauuqDXnv+8RO5/xg47zMyk1KBmOo05RpWMWNZke6+qTM7LF/t8tqCJvXM4gujLaFOy6OWw==',key_name='tempest-TestNetworkBasicOps-602231180',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:16:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-lvs1wtfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:16:12Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=c16efa00-d8c2-4271-81ee-2b14db71ec3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.681 2 DEBUG nova.network.os_vif_util [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "76963199-0e40-4c51-84ca-2dbe48d96157", "address": "fa:16:3e:17:24:d9", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76963199-0e", "ovs_interfaceid": "76963199-0e40-4c51-84ca-2dbe48d96157", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.682 2 DEBUG nova.network.os_vif_util [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.684 2 DEBUG os_vif [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
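The nova_to_osvif_vif/unplug entries above trace the standard os-vif teardown: nova converts its internal VIF dict into an os-vif VIFOpenVSwitch object, then os_vif.unplug() dispatches to the 'ovs' plugin. A minimal standalone sketch of that public API follows, using values from the log; it is deliberately stripped down (real callers also pass the full network and port_profile objects), and the InstanceInfo fields shown are an assumed minimal set.

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the os-vif plugins (ovs, linux_bridge, ...) via stevedore

    # Rebuild a stripped-down version of the object logged at 10:17:11.684.
    vif = vif_obj.VIFOpenVSwitch(
        id='76963199-0e40-4c51-84ca-2dbe48d96157',
        address='fa:16:3e:17:24:d9',
        bridge_name='br-int',
        vif_name='tap76963199-0e',
        plugin='ovs',
    )

    # unplug only needs instance identity; assumed minimal fields for illustration.
    inst = instance_info.InstanceInfo(
        uuid='c16efa00-d8c2-4271-81ee-2b14db71ec3b',
        name='tempest-testnetworkbasicops-server-2113120084',
    )

    os_vif.unplug(vif, inst)  # emits the "Unplugging vif ..." / "Successfully unplugged vif ..." pair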
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76963199-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
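DelPortCommand is the ovsdbapp Open_vSwitch-schema transaction backing this unplug; with if_exists=True it is a no-op when the port row is already gone, which keeps retries idempotent. A rough standalone equivalent, assuming a local ovsdb-server socket (TCP/SSL endpoints also work); port and bridge names are from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local switch socket for this sketch.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same txn as logged: DelPortCommand(port=tap76963199-0e, bridge=br-int).
    api.del_port('tap76963199-0e', bridge='br-int',
                 if_exists=True).execute(check_error=True)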
Oct 10 10:17:11 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [NOTICE]   (276260) : haproxy version is 2.8.14-c23fe91
Oct 10 10:17:11 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [NOTICE]   (276260) : path to executable is /usr/sbin/haproxy
Oct 10 10:17:11 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [WARNING]  (276260) : Exiting Master process...
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [ALERT]    (276260) : Current worker (276262) exited with code 143 (Terminated)
Oct 10 10:17:11 compute-0 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[276256]: [WARNING]  (276260) : All workers exited. Exiting... (0)
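The exit code in the ALERT line is the conventional 128+signal encoding: the haproxy worker for the deleted network's metadata proxy was sent SIGTERM (podman stop's default signal) when the container was torn down, so it reports 128 + 15 = 143 and the master then exits on its own. Quick check:

    import signal

    # Shells and supervisors report "terminated by signal N" as status 128 + N.
    assert 128 + signal.SIGTERM == 143   # SIGTERM == 15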
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.697 2 INFO os_vif [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:24:d9,bridge_name='br-int',has_traffic_filtering=True,id=76963199-0e40-4c51-84ca-2dbe48d96157,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76963199-0e')
Oct 10 10:17:11 compute-0 systemd[1]: libpod-abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c.scope: Deactivated successfully.
Oct 10 10:17:11 compute-0 podman[277326]: 2025-10-10 10:17:11.700824876 +0000 UTC m=+0.056365968 container died abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 10:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c-userdata-shm.mount: Deactivated successfully.
Oct 10 10:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cffd59f74a4f7987dc864cfc3e125d345d3a74906d8c82929a91decf40f695a8-merged.mount: Deactivated successfully.
Oct 10 10:17:11 compute-0 podman[277326]: 2025-10-10 10:17:11.745643752 +0000 UTC m=+0.101184834 container cleanup abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 10:17:11 compute-0 systemd[1]: libpod-conmon-abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c.scope: Deactivated successfully.
Oct 10 10:17:11 compute-0 podman[277383]: 2025-10-10 10:17:11.824945899 +0000 UTC m=+0.056358078 container remove abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.831 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc3c41e-5c88-4ffe-b9cf-7415b89e5a21]: (4, ('Fri Oct 10 10:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 (abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c)\nabeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c\nFri Oct 10 10:17:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 (abeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c)\nabeaf9d51854aeef3a269de1a3842a89de9e781d23f396ad46607caec377505c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.834 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[e202d012-13db-4738-b7c0-7ed6571a2e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.835 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87f6394d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 kernel: tap87f6394d-40: left promiscuous mode
Oct 10 10:17:11 compute-0 nova_compute[261329]: 2025-10-10 10:17:11.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.857 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[574c6132-7592-4991-a146-bf5d6b5812df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.887 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[2116f50a-2727-401a-a32f-66dfac358397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.889 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[41bf5747-72c8-4b83-8c7f-8fb1b0ac31a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.905 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[4f331cfc-2788-44ed-abf2-8e389d3263fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427525, 'reachable_time': 18224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277400, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d87f6394d\x2d4290\x2d4eca\x2d8ba0\x2d18711f3ad6e0.mount: Deactivated successfully.
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.909 163038 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:17:11 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:11.910 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[cf81723c-a50f-4c75-8762-34d26c14bd48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
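remove_netns in neutron's privileged ip_lib is a thin pyroute2 wrapper executed inside the privsep daemon (as root), which is why the call and its result both surface as privsep DEBUG lines. The underlying operation, minus the privsep plumbing, is roughly:

    from pyroute2 import netns

    NS = 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0'

    # Equivalent of "ip netns delete"; needs CAP_SYS_ADMIN, hence privsep.
    if NS in netns.listnetns():
        netns.remove(NS)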
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.070 2 INFO nova.virt.libvirt.driver [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Deleting instance files /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b_del
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.071 2 INFO nova.virt.libvirt.driver [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Deletion of /var/lib/nova/instances/c16efa00-d8c2-4271-81ee-2b14db71ec3b_del complete
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.158 2 INFO nova.compute.manager [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.159 2 DEBUG oslo.service.loopingcall [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.159 2 DEBUG nova.compute.manager [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.160 2 DEBUG nova.network.neutron [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:17:12 compute-0 ceph-mon[73551]: pgmap v937: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 2.0 KiB/s wr, 178 op/s
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.551 2 DEBUG nova.compute.manager [req-b3e994a1-7bc0-4d6a-b49e-ef4c36490d1d req-df84e329-cdd3-4e12-b5a7-0cdfe9370928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-vif-unplugged-76963199-0e40-4c51-84ca-2dbe48d96157 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.551 2 DEBUG oslo_concurrency.lockutils [req-b3e994a1-7bc0-4d6a-b49e-ef4c36490d1d req-df84e329-cdd3-4e12-b5a7-0cdfe9370928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.552 2 DEBUG oslo_concurrency.lockutils [req-b3e994a1-7bc0-4d6a-b49e-ef4c36490d1d req-df84e329-cdd3-4e12-b5a7-0cdfe9370928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.552 2 DEBUG oslo_concurrency.lockutils [req-b3e994a1-7bc0-4d6a-b49e-ef4c36490d1d req-df84e329-cdd3-4e12-b5a7-0cdfe9370928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.552 2 DEBUG nova.compute.manager [req-b3e994a1-7bc0-4d6a-b49e-ef4c36490d1d req-df84e329-cdd3-4e12-b5a7-0cdfe9370928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] No waiting events found dispatching network-vif-unplugged-76963199-0e40-4c51-84ca-2dbe48d96157 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.552 2 DEBUG nova.compute.manager [req-b3e994a1-7bc0-4d6a-b49e-ef4c36490d1d req-df84e329-cdd3-4e12-b5a7-0cdfe9370928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-vif-unplugged-76963199-0e40-4c51-84ca-2dbe48d96157 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
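The Acquiring/acquired/released triple around this event is nova's per-instance event serialization: every external-event handler takes a '<instance-uuid>-events' lock before popping a waiter, so an unplug event racing the delete path cannot corrupt the event table. The lock itself is plain oslo.concurrency; a minimal sketch (lock name from the log, loop body hypothetical):

    from oslo_concurrency import lockutils

    instance_uuid = 'c16efa00-d8c2-4271-81ee-2b14db71ec3b'

    # Produces the same Acquiring/acquired/released DEBUG lines seen above.
    with lockutils.lock(f'{instance_uuid}-events'):
        # look up and pop any waiter registered for this event (hypothetical)
        pass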
Oct 10 10:17:12 compute-0 nova_compute[261329]: 2025-10-10 10:17:12.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 4.2 KiB/s wr, 207 op/s
Oct 10 10:17:13 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:13.207 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:13 compute-0 nova_compute[261329]: 2025-10-10 10:17:13.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:13 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:13.209 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:17:13 compute-0 podman[277404]: 2025-10-10 10:17:13.210713743 +0000 UTC m=+0.056212584 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 10 10:17:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:13.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:13 compute-0 nova_compute[261329]: 2025-10-10 10:17:13.472 2 DEBUG nova.network.neutron [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:13 compute-0 nova_compute[261329]: 2025-10-10 10:17:13.490 2 INFO nova.compute.manager [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Took 1.33 seconds to deallocate network for instance.
Oct 10 10:17:13 compute-0 nova_compute[261329]: 2025-10-10 10:17:13.528 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:13 compute-0 nova_compute[261329]: 2025-10-10 10:17:13.528 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:13 compute-0 nova_compute[261329]: 2025-10-10 10:17:13.585 2 DEBUG oslo_concurrency.processutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:17:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478587300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.045 2 DEBUG oslo_concurrency.processutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
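Because the instances here live on Ceph RBD, the resource tracker sizes disk inventory from the cluster rather than the local filesystem, hence the `ceph df --format=json` subprocess during usage updates. Parsing the same command by hand looks like this (cluster-wide totals live under the top-level "stats" key of the JSON output):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']

    total_gb = stats['total_bytes'] / 1024 ** 3
    avail_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')  # ~60/60 GiB on this cluster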
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.052 2 DEBUG nova.compute.provider_tree [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.072 2 DEBUG nova.scheduler.client.report [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.099 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
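Placement treats each inventory record above as usable = (total - reserved) × allocation_ratio, so this host advertises 32 vCPUs (8 × 4.0), 7168 MB of RAM ((7680 - 512) × 1.0) and 52 GB of disk ((59 - 1) × 0.9, truncated). Worked out:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        usable = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, usable)   # VCPU 32, MEMORY_MB 7168, DISK_GB 52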
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.128 2 INFO nova.scheduler.client.report [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance c16efa00-d8c2-4271-81ee-2b14db71ec3b
Oct 10 10:17:14 compute-0 ceph-mon[73551]: pgmap v938: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 4.2 KiB/s wr, 207 op/s
Oct 10 10:17:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2478587300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.202 2 DEBUG oslo_concurrency.lockutils [None req-5b32922f-9baf-4727-9c4a-919e9c983f82 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.660 2 DEBUG nova.compute.manager [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.661 2 DEBUG oslo_concurrency.lockutils [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.661 2 DEBUG oslo_concurrency.lockutils [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.662 2 DEBUG oslo_concurrency.lockutils [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c16efa00-d8c2-4271-81ee-2b14db71ec3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.662 2 DEBUG nova.compute.manager [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] No waiting events found dispatching network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.662 2 WARNING nova.compute.manager [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received unexpected event network-vif-plugged-76963199-0e40-4c51-84ca-2dbe48d96157 for instance with vm_state deleted and task_state None.
Oct 10 10:17:14 compute-0 nova_compute[261329]: 2025-10-10 10:17:14.663 2 DEBUG nova.compute.manager [req-a38ca135-2102-4443-a83e-b4e568c15a24 req-7e79a675-7a7a-43ec-b755-eb287440dc05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Received event network-vif-deleted-76963199-0e40-4c51-84ca-2dbe48d96157 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.2 KiB/s wr, 206 op/s
Oct 10 10:17:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:15.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:16 compute-0 ceph-mon[73551]: pgmap v939: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.2 KiB/s wr, 206 op/s
Oct 10 10:17:16 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:16.211 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
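This is the write the agent delayed at 10:17:13: it acknowledges the SB_Global nb_cfg bump (8 → 9) by stamping neutron:ovn-metadata-sb-cfg=9 into its own Chassis_Private external_ids, which is how neutron tracks metadata-agent liveness against the OVN southbound. A generic ovsdbapp sketch of the same call, assuming a plain-TCP southbound endpoint for brevity (this deployment actually uses TLS):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Merge-update external_ids on this agent's Chassis_Private row, as logged.
    api.db_set('Chassis_Private', 'a1a60c06-0b75-41d0-88d4-dc571cb95004',
               ('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),
               if_exists=True).execute(check_error=True)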
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:17:16
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.nfs', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images']
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:17:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:17:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:17:16 compute-0 nova_compute[261329]: 2025-10-10 10:17:16.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632156752620955 of space, bias 1.0, pg target 0.22896470257862866 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
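The autoscaler numbers are internally consistent: each pg target is usage_ratio × bias × N, where N = 300 here, matching 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from the printed values, e.g. 0.0007632… × 1.0 × 300 = 0.2289… for 'vms'); the result is then quantized toward a power of two, with tiny targets clamped to a per-pool floor. Reproducing two of the rows above:

    def pg_target(usage_ratio, bias, osds=3, pg_per_osd=100):
        # pg_autoscaler: target = ratio * bias * (osds * mon_target_pg_per_osd)
        return usage_ratio * bias * osds * pg_per_osd

    print(pg_target(0.0007632156752620955, 1.0))  # 0.2289647... -> quantized to 32
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707... -> quantized to 16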
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:17:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:17:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.2 KiB/s wr, 206 op/s
Oct 10 10:17:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:17.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:17:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:17.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:17.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:17] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:17:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:17] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:17:17 compute-0 nova_compute[261329]: 2025-10-10 10:17:17.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-0 nova_compute[261329]: 2025-10-10 10:17:18.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-0 ceph-mon[73551]: pgmap v940: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.2 KiB/s wr, 206 op/s
Oct 10 10:17:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 5.2 KiB/s wr, 207 op/s
Oct 10 10:17:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:19.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:19.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:19 compute-0 sudo[277452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:17:19 compute-0 sudo[277452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:19 compute-0 sudo[277452]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:20 compute-0 ceph-mon[73551]: pgmap v941: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 5.2 KiB/s wr, 207 op/s
Oct 10 10:17:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 10 10:17:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:21.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3931265671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:21.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:21 compute-0 nova_compute[261329]: 2025-10-10 10:17:21.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:22 compute-0 ceph-mon[73551]: pgmap v942: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 10 10:17:22 compute-0 nova_compute[261329]: 2025-10-10 10:17:22.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 KiB/s wr, 57 op/s
Oct 10 10:17:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:23.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:23.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:24 compute-0 ceph-mon[73551]: pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 KiB/s wr, 57 op/s
Oct 10 10:17:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:25.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:26 compute-0 ceph-mon[73551]: pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:26 compute-0 nova_compute[261329]: 2025-10-10 10:17:26.657 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091431.6560204, c16efa00-d8c2-4271-81ee-2b14db71ec3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:17:26 compute-0 nova_compute[261329]: 2025-10-10 10:17:26.658 2 INFO nova.compute.manager [-] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] VM Stopped (Lifecycle Event)
Oct 10 10:17:26 compute-0 nova_compute[261329]: 2025-10-10 10:17:26.689 2 DEBUG nova.compute.manager [None req-7311c562-7037-44b0-a8f7-33e2c6293ef2 - - - - - -] [instance: c16efa00-d8c2-4271-81ee-2b14db71ec3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:17:26 compute-0 nova_compute[261329]: 2025-10-10 10:17:26.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:27.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:17:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:27.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3051023765' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:17:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3051023765' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:17:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:27.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:27] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:17:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:27] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:17:27 compute-0 nova_compute[261329]: 2025-10-10 10:17:27.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:28 compute-0 ceph-mon[73551]: pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:29.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:29.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:30 compute-0 podman[277489]: 2025-10-10 10:17:30.237949487 +0000 UTC m=+0.073889524 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:17:30 compute-0 podman[277490]: 2025-10-10 10:17:30.274338601 +0000 UTC m=+0.109374528 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 10 10:17:30 compute-0 podman[277491]: 2025-10-10 10:17:30.287641189 +0000 UTC m=+0.108044504 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:17:30 compute-0 ceph-mon[73551]: pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 10 10:17:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:31.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:17:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:31.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:31 compute-0 nova_compute[261329]: 2025-10-10 10:17:31.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:32 compute-0 ceph-mon[73551]: pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 10 10:17:32 compute-0 nova_compute[261329]: 2025-10-10 10:17:32.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 10 10:17:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:33.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:33 compute-0 ceph-mon[73551]: pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 10 10:17:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:35.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:35 compute-0 unix_chkpwd[277564]: password check failed for user (root)
Oct 10 10:17:35 compute-0 sshd-session[277562]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 10 10:17:36 compute-0 ceph-mon[73551]: pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:36 compute-0 nova_compute[261329]: 2025-10-10 10:17:36.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:37.175Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:17:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:37.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:17:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:37.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:17:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:37] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Oct 10 10:17:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:37] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Oct 10 10:17:37 compute-0 nova_compute[261329]: 2025-10-10 10:17:37.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:38 compute-0 ceph-mon[73551]: pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:38 compute-0 sshd-session[277562]: Failed password for root from 91.224.92.108 port 28348 ssh2
Oct 10 10:17:38 compute-0 unix_chkpwd[277568]: password check failed for user (root)
Oct 10 10:17:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:17:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:39 compute-0 sudo[277570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:17:39 compute-0 sudo[277570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:39 compute-0 sudo[277570]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:40 compute-0 ceph-mon[73551]: pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:17:40 compute-0 sshd-session[277562]: Failed password for root from 91.224.92.108 port 28348 ssh2
Oct 10 10:17:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:17:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:17:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:41.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:41 compute-0 unix_chkpwd[277597]: password check failed for user (root)
Oct 10 10:17:41 compute-0 nova_compute[261329]: 2025-10-10 10:17:41.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:41.906 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:41.907 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:41.907 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:42 compute-0 ceph-mon[73551]: pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:42 compute-0 nova_compute[261329]: 2025-10-10 10:17:42.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:17:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:43.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:43 compute-0 sshd-session[277562]: Failed password for root from 91.224.92.108 port 28348 ssh2
Oct 10 10:17:44 compute-0 ceph-mon[73551]: pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:17:44 compute-0 podman[277601]: 2025-10-10 10:17:44.236438742 +0000 UTC m=+0.084472585 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:17:44 compute-0 sshd-session[277562]: Received disconnect from 91.224.92.108 port 28348:11:  [preauth]
Oct 10 10:17:44 compute-0 sshd-session[277562]: Disconnected from authenticating user root 91.224.92.108 port 28348 [preauth]
Oct 10 10:17:44 compute-0 sshd-session[277562]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 10 10:17:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:45 compute-0 unix_chkpwd[277625]: password check failed for user (root)
Oct 10 10:17:45 compute-0 sshd-session[277622]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.181 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.182 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.200 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.271 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.272 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:45.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.280 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.281 2 INFO nova.compute.claims [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Claim successful on node compute-0.ctlplane.example.com
Oct 10 10:17:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:45.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.398 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:17:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710863530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.836 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.842 2 DEBUG nova.compute.provider_tree [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.861 2 DEBUG nova.scheduler.client.report [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.886 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.888 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.944 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.945 2 DEBUG nova.network.neutron [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.971 2 INFO nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:17:45 compute-0 nova_compute[261329]: 2025-10-10 10:17:45.991 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.097 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.099 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.099 2 INFO nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Creating image(s)
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.128 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.161 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:46 compute-0 ceph-mon[73551]: pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/710863530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.199 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.202 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.236 2 DEBUG nova.policy [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.256 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.257 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.258 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.258 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.284 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.287 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:17:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:17:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:17:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:17:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:17:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:17:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.538 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.616 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.745 2 DEBUG nova.objects.instance [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid c1cdb119-d621-43f0-9cde-b0a0da0c0239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.761 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.762 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Ensure instance console log exists: /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.762 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.763 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.763 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:46 compute-0 nova_compute[261329]: 2025-10-10 10:17:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:47.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:17:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:47.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:17:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.284 2 DEBUG nova.network.neutron [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Successfully updated port: 864e1646-5abd-4268-a80a-c224425c842d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.300 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.301 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.301 2 DEBUG nova.network.neutron [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:17:47 compute-0 sshd-session[277622]: Failed password for root from 91.224.92.108 port 52486 ssh2
Oct 10 10:17:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:47.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:47] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:17:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:47] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.407 2 DEBUG nova.compute.manager [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-changed-864e1646-5abd-4268-a80a-c224425c842d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.407 2 DEBUG nova.compute.manager [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Refreshing instance network info cache due to event network-changed-864e1646-5abd-4268-a80a-c224425c842d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.407 2 DEBUG oslo_concurrency.lockutils [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.473 2 DEBUG nova.network.neutron [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:17:47 compute-0 nova_compute[261329]: 2025-10-10 10:17:47.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:47 compute-0 unix_chkpwd[277816]: password check failed for user (root)
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.143 2 DEBUG nova.network.neutron [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updating instance_info_cache with network_info: [{"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.165 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.166 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Instance network_info: |[{"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.166 2 DEBUG oslo_concurrency.lockutils [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.167 2 DEBUG nova.network.neutron [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Refreshing network info cache for port 864e1646-5abd-4268-a80a-c224425c842d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.174 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Start _get_guest_xml network_info=[{"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_options': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.181 2 WARNING nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.185 2 DEBUG nova.virt.libvirt.host [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.186 2 DEBUG nova.virt.libvirt.host [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.197 2 DEBUG nova.virt.libvirt.host [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.198 2 DEBUG nova.virt.libvirt.host [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.198 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.199 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.200 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.200 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.200 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.201 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.201 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.201 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.202 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.202 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.202 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.203 2 DEBUG nova.virt.hardware [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
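The hardware.py lines above trace the topology search for this flavor: no flavor or image constraints (0:0:0), per-dimension caps of 65536, and a single candidate 1:1:1 for one vCPU. A rough sketch of such an enumeration over (sockets, cores, threads) factorizations (illustrative of the logged behaviour, not Nova's exact algorithm):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every (sockets, cores, threads) split whose product is
        # exactly the vCPU count and which respects the per-dimension caps.
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"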
Oct 10 10:17:48 compute-0 ceph-mon[73551]: pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.208 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:17:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477895027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.688 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.721 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:48 compute-0 nova_compute[261329]: 2025-10-10 10:17:48.727 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:17:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:17:49 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219895572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.184 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
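The mon dump above is how the driver learns the monitor endpoints that reappear as <host> elements in the RBD disk sources of the guest XML below. A sketch of the same call, reusing the client id and conf path from the logged command line:

    import json
    import subprocess

    # Same command the driver runs via oslo_concurrency.processutils.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    for mon in mon_map["mons"]:
        # e.g. 192.168.122.100:6789/0
        print(mon["name"], mon.get("public_addr"))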
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.187 2 DEBUG nova.virt.libvirt.vif [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-271154650',display_name='tempest-TestNetworkBasicOps-server-271154650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-271154650',id=8,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHElCqDyCQ+82hX9vJ8K1kIG6Yt4k7uqQlCtgDpjVAyd9GFMAUZl401bxv9GULrJf58YsTDnw1NNFBQ9ksOoC9Fo48vf+QVftSyAx+s1pKM02LoH8hpZOMHqdZ0sPl7XZg==',key_name='tempest-TestNetworkBasicOps-1303595731',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-93pmada6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:17:46Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=c1cdb119-d621-43f0-9cde-b0a0da0c0239,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.188 2 DEBUG nova.network.os_vif_util [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.190 2 DEBUG nova.network.os_vif_util [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.192 2 DEBUG nova.objects.instance [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1cdb119-d621-43f0-9cde-b0a0da0c0239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.226 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <uuid>c1cdb119-d621-43f0-9cde-b0a0da0c0239</uuid>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <name>instance-00000008</name>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <memory>131072</memory>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <vcpu>1</vcpu>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <metadata>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:name>tempest-TestNetworkBasicOps-server-271154650</nova:name>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:creationTime>2025-10-10 10:17:48</nova:creationTime>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:flavor name="m1.nano">
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:memory>128</nova:memory>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:disk>1</nova:disk>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:swap>0</nova:swap>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </nova:flavor>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:owner>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </nova:owner>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <nova:ports>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <nova:port uuid="864e1646-5abd-4268-a80a-c224425c842d">
Oct 10 10:17:49 compute-0 nova_compute[261329]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         </nova:port>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </nova:ports>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </nova:instance>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </metadata>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <sysinfo type="smbios">
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <system>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <entry name="serial">c1cdb119-d621-43f0-9cde-b0a0da0c0239</entry>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <entry name="uuid">c1cdb119-d621-43f0-9cde-b0a0da0c0239</entry>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </system>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </sysinfo>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <os>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <boot dev="hd"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <smbios mode="sysinfo"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </os>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <features>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <acpi/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <apic/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <vmcoreinfo/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </features>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <clock offset="utc">
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <timer name="hpet" present="no"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </clock>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <cpu mode="host-model" match="exact">
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <disk type="network" device="disk">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk">
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </source>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <target dev="vda" bus="virtio"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <disk type="network" device="cdrom">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk.config">
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </source>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:17:49 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <target dev="sda" bus="sata"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <interface type="ethernet">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <mac address="fa:16:3e:19:de:db"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <mtu size="1442"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <target dev="tap864e1646-5a"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <serial type="pty">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <log file="/var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/console.log" append="off"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </serial>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <video>
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </video>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <input type="tablet" bus="usb"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <rng model="virtio">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <controller type="usb" index="0"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     <memballoon model="virtio">
Oct 10 10:17:49 compute-0 nova_compute[261329]:       <stats period="10"/>
Oct 10 10:17:49 compute-0 nova_compute[261329]:     </memballoon>
Oct 10 10:17:49 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:17:49 compute-0 nova_compute[261329]: </domain>
Oct 10 10:17:49 compute-0 nova_compute[261329]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
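A minimal sketch of what happens to XML like the block above: it is handed to libvirt, which defines and boots the domain (systemd later logs "Started Virtual Machine qemu-3-instance-00000008"). The file name here is hypothetical, and Nova's real path through driver.py additionally wires up event handling around the launch:

    import libvirt

    xml = open("instance-00000008.xml").read()  # hypothetical copy of the XML above
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it
        print(dom.name(), dom.isActive())
    finally:
        conn.close()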
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.228 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Preparing to wait for external event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.228 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.228 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.228 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
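The three lockutils lines above are the standard oslo.concurrency pattern: a short critical section guarded by a per-instance "<uuid>-events" lock so that event registration is serialized. Sketched below with the lock name from the log and a placeholder body:

    from oslo_concurrency import lockutils

    instance_uuid = "c1cdb119-d621-43f0-9cde-b0a0da0c0239"

    with lockutils.lock(f"{instance_uuid}-events"):
        # critical section: create-or-get the pending event entry for
        # network-vif-plugged before the VIF is actually plugged
        pass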
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.229 2 DEBUG nova.virt.libvirt.vif [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-271154650',display_name='tempest-TestNetworkBasicOps-server-271154650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-271154650',id=8,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHElCqDyCQ+82hX9vJ8K1kIG6Yt4k7uqQlCtgDpjVAyd9GFMAUZl401bxv9GULrJf58YsTDnw1NNFBQ9ksOoC9Fo48vf+QVftSyAx+s1pKM02LoH8hpZOMHqdZ0sPl7XZg==',key_name='tempest-TestNetworkBasicOps-1303595731',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-93pmada6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:17:46Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=c1cdb119-d621-43f0-9cde-b0a0da0c0239,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.229 2 DEBUG nova.network.os_vif_util [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3477895027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.230 2 DEBUG nova.network.os_vif_util [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1219895572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.230 2 DEBUG os_vif [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap864e1646-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap864e1646-5a, col_values=(('external_ids', {'iface-id': '864e1646-5abd-4268-a80a-c224425c842d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:de:db', 'vm-uuid': 'c1cdb119-d621-43f0-9cde-b0a0da0c0239'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
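The two transactions above amount to an idempotent port add plus the external_ids that let ovn-controller match the tap device to its logical port. For reference, the same effect expressed through ovs-vsctl, with every value copied from the log (a sketch of the equivalent CLI, not what os-vif literally executes):

    import subprocess

    port = "tap864e1646-5a"
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", "br-int"])
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=864e1646-5abd-4268-a80a-c224425c842d",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:19:de:db",
        "external_ids:vm-uuid=c1cdb119-d621-43f0-9cde-b0a0da0c0239",
    ])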
Oct 10 10:17:49 compute-0 NetworkManager[44849]: <info>  [1760091469.2377] manager: (tap864e1646-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.244 2 INFO os_vif [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a')
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.264 2 DEBUG nova.network.neutron [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updated VIF entry in instance network info cache for port 864e1646-5abd-4268-a80a-c224425c842d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.266 2 DEBUG nova.network.neutron [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updating instance_info_cache with network_info: [{"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.291 2 DEBUG oslo_concurrency.lockutils [req-de669790-51a1-4cd9-a748-bc9d651d38b0 req-7559fda7-7d12-4f7f-9c37-d13f7234fd75 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.317 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.318 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.318 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:19:de:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.318 2 INFO nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Using config drive
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.344 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.632 2 INFO nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Creating config drive at /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/disk.config
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.636 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0i7_c4l2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.777 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0i7_c4l2" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.812 2 DEBUG nova.storage.rbd_utils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.817 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/disk.config c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.998 2 DEBUG oslo_concurrency.processutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/disk.config c1cdb119-d621-43f0-9cde-b0a0da0c0239_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:49 compute-0 nova_compute[261329]: 2025-10-10 10:17:49.999 2 INFO nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Deleting local config drive /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239/disk.config because it was imported into RBD.
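The config-drive steps just logged, condensed into a sketch: build the ISO9660 image with mkisofs, import it into the Ceph vms pool, then remove the local copy. Paths and flags are taken from the logged commands; the -publisher flag is omitted here for brevity:

    import os
    import subprocess

    uuid = "c1cdb119-d621-43f0-9cde-b0a0da0c0239"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    staging = "/tmp/tmp0i7_c4l2"  # metadata tree Nova staged (logged temp dir)

    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2", staging])
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    os.remove(iso)  # "Deleting local config drive ... because it was imported into RBD."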
Oct 10 10:17:50 compute-0 kernel: tap864e1646-5a: entered promiscuous mode
Oct 10 10:17:50 compute-0 NetworkManager[44849]: <info>  [1760091470.0588] manager: (tap864e1646-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 10 10:17:50 compute-0 ovn_controller[153080]: 2025-10-10T10:17:50Z|00046|binding|INFO|Claiming lport 864e1646-5abd-4268-a80a-c224425c842d for this chassis.
Oct 10 10:17:50 compute-0 ovn_controller[153080]: 2025-10-10T10:17:50Z|00047|binding|INFO|864e1646-5abd-4268-a80a-c224425c842d: Claiming fa:16:3e:19:de:db 10.100.0.4
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.072 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:de:db 10.100.0.4'], port_security=['fa:16:3e:19:de:db 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1060241160', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c1cdb119-d621-43f0-9cde-b0a0da0c0239', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1060241160', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58a83406-32bd-40d9-b3dd-ed56e38abb09, chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=864e1646-5abd-4268-a80a-c224425c842d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.073 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 864e1646-5abd-4268-a80a-c224425c842d in datapath f2187c16-3ad9-4fc6-892a-d36a6262d4d0 bound to our chassis
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.074 162925 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2187c16-3ad9-4fc6-892a-d36a6262d4d0
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.088 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[42154947-45a8-448e-ac27-72dd691ee2f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.089 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf2187c16-31 in ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
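A sketch of the veth plumbing announced above, written out as iproute2 commands. Interface and namespace names are copied from the log; the agent itself performs these steps through pyroute2 under oslo.privsep (the reply[...] lines that follow) rather than the CLI:

    import subprocess

    ns = "ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0"
    outer, inner = "tapf2187c16-30", "tapf2187c16-31"

    subprocess.check_call(["ip", "netns", "add", ns])
    # Create the veth pair in the root namespace, then move one end
    # into the metadata namespace and bring both ends up.
    subprocess.check_call(["ip", "link", "add", outer, "type", "veth",
                           "peer", "name", inner])
    subprocess.check_call(["ip", "link", "set", inner, "netns", ns])
    subprocess.check_call(["ip", "netns", "exec", ns, "ip", "link", "set", inner, "up"])
    subprocess.check_call(["ip", "link", "set", outer, "up"])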
Oct 10 10:17:50 compute-0 systemd-udevd[277953]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.092 269344 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf2187c16-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.093 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b591dc-4b48-4a1e-ab0d-c3a89f2f8a10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.093 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a5da57-e3e2-4df0-a457-fbd9e7f865b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 systemd-machined[215425]: New machine qemu-3-instance-00000008.
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.109 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bebeb7-ea2e-4c95-b592-6eb1ffd828bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 NetworkManager[44849]: <info>  [1760091470.1138] device (tap864e1646-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:17:50 compute-0 NetworkManager[44849]: <info>  [1760091470.1147] device (tap864e1646-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:17:50 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:50 compute-0 ovn_controller[153080]: 2025-10-10T10:17:50Z|00048|binding|INFO|Setting lport 864e1646-5abd-4268-a80a-c224425c842d ovn-installed in OVS
Oct 10 10:17:50 compute-0 ovn_controller[153080]: 2025-10-10T10:17:50Z|00049|binding|INFO|Setting lport 864e1646-5abd-4268-a80a-c224425c842d up in Southbound
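With ovn-controller reporting the lport up in the Southbound DB, the binding can be double-checked from the chassis, e.g. via ovn-sbctl (illustrative; assumes access to the SB database socket):

    import subprocess

    # Show the Port_Binding row for the logical port claimed above.
    print(subprocess.check_output(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=864e1646-5abd-4268-a80a-c224425c842d"]).decode())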
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.135 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[2d38f6ce-3957-477e-bb06-59200eea5c99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.170 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b1624f-6b9e-4ec1-a9ee-9239dc337dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 systemd-udevd[277959]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:17:50 compute-0 NetworkManager[44849]: <info>  [1760091470.1761] manager: (tapf2187c16-30): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.175 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[2f123be8-3137-4929-a113-c7bc5ab05c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.210 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[33d24b29-b646-4370-bddb-07a1054d863d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.212 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[389e966c-1813-4129-bcbf-d31e5faa8c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ceph-mon[73551]: pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:17:50 compute-0 NetworkManager[44849]: <info>  [1760091470.2376] device (tapf2187c16-30): carrier: link connected
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.246 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[4fdf49f2-011b-4816-b79e-107f93a1d41a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.263 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8c2f08-6b33-460c-8a96-72bb84b8e737]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2187c16-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:33:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437380, 'reachable_time': 30630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277988, 'error': None, 'target': 'ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.280 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[85f2b885-796b-487b-a733-852bc8279371]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:3311'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437380, 'tstamp': 437380}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277989, 'error': None, 'target': 'ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.298 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[98825059-06f6-4863-bf60-0187eb019162]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2187c16-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:33:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437380, 'reachable_time': 30630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277990, 'error': None, 'target': 'ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.329 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6c03ef-aa90-41d1-a979-7aaaa1db3583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.394 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[35677596-8a8d-4047-869a-b39415d4d324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.395 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2187c16-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.395 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.396 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2187c16-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:50 compute-0 NetworkManager[44849]: <info>  [1760091470.3982] manager: (tapf2187c16-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 10 10:17:50 compute-0 kernel: tapf2187c16-30: entered promiscuous mode
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.400 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2187c16-30, col_values=(('external_ids', {'iface-id': 'e9f075b6-37df-4f28-90c0-0fcdd3460568'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:50 compute-0 ovn_controller[153080]: 2025-10-10T10:17:50Z|00050|binding|INFO|Releasing lport e9f075b6-37df-4f28-90c0-0fcdd3460568 from this chassis (sb_readonly=0)
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.415 162925 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f2187c16-3ad9-4fc6-892a-d36a6262d4d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f2187c16-3ad9-4fc6-892a-d36a6262d4d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.416 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8c3678-5ea4-429d-9c76-b1f043b76be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.417 162925 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: global
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     log         /dev/log local0 debug
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     log-tag     haproxy-metadata-proxy-f2187c16-3ad9-4fc6-892a-d36a6262d4d0
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     user        root
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     group       root
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     maxconn     1024
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     pidfile     /var/lib/neutron/external/pids/f2187c16-3ad9-4fc6-892a-d36a6262d4d0.pid.haproxy
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     daemon
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: defaults
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     log global
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     mode http
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     option httplog
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     option dontlognull
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     option http-server-close
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     option forwardfor
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     retries                 3
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     timeout http-request    30s
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     timeout connect         30s
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     timeout client          32s
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     timeout server          32s
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     timeout http-keep-alive 30s
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: listen listener
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     bind 169.254.169.254:80
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:     http-request add-header X-OVN-Network-ID f2187c16-3ad9-4fc6-892a-d36a6262d4d0
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:17:50 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:50.419 162925 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'env', 'PROCESS_TAG=haproxy-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f2187c16-3ad9-4fc6-892a-d36a6262d4d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:17:50 compute-0 sshd-session[277622]: Failed password for root from 91.224.92.108 port 52486 ssh2
Oct 10 10:17:50 compute-0 unix_chkpwd[278077]: password check failed for user (root)
Oct 10 10:17:50 compute-0 podman[278064]: 2025-10-10 10:17:50.809429408 +0000 UTC m=+0.055921204 container create 33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:17:50 compute-0 systemd[1]: Started libpod-conmon-33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a.scope.
Oct 10 10:17:50 compute-0 podman[278064]: 2025-10-10 10:17:50.782643924 +0000 UTC m=+0.029135740 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:17:50 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f56f17379ffce28902a60441cf56a932a02a6b284a9471a4db2a94c909c3af49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.905 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091470.904581, c1cdb119-d621-43f0-9cde-b0a0da0c0239 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.905 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] VM Started (Lifecycle Event)
Oct 10 10:17:50 compute-0 podman[278064]: 2025-10-10 10:17:50.916688056 +0000 UTC m=+0.163179942 container init 33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:17:50 compute-0 podman[278064]: 2025-10-10 10:17:50.927555437 +0000 UTC m=+0.174047263 container start 33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.929 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.935 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091470.904717, c1cdb119-d621-43f0-9cde-b0a0da0c0239 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.936 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] VM Paused (Lifecycle Event)
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.960 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:17:50 compute-0 neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0[278081]: [NOTICE]   (278085) : New worker (278087) forked
Oct 10 10:17:50 compute-0 neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0[278081]: [NOTICE]   (278085) : Loading success.
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.965 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:17:50 compute-0 nova_compute[261329]: 2025-10-10 10:17:50.992 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:17:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.086 2 DEBUG nova.compute.manager [req-d8258531-948b-4f9f-b538-32d7e6bbeffd req-af68a470-363b-448b-8bd3-5237e1df236e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.086 2 DEBUG oslo_concurrency.lockutils [req-d8258531-948b-4f9f-b538-32d7e6bbeffd req-af68a470-363b-448b-8bd3-5237e1df236e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.087 2 DEBUG oslo_concurrency.lockutils [req-d8258531-948b-4f9f-b538-32d7e6bbeffd req-af68a470-363b-448b-8bd3-5237e1df236e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.087 2 DEBUG oslo_concurrency.lockutils [req-d8258531-948b-4f9f-b538-32d7e6bbeffd req-af68a470-363b-448b-8bd3-5237e1df236e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.088 2 DEBUG nova.compute.manager [req-d8258531-948b-4f9f-b538-32d7e6bbeffd req-af68a470-363b-448b-8bd3-5237e1df236e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Processing event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.089 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.093 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091471.0933654, c1cdb119-d621-43f0-9cde-b0a0da0c0239 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.094 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] VM Resumed (Lifecycle Event)
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.097 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.101 2 INFO nova.virt.libvirt.driver [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Instance spawned successfully.
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.102 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.118 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.128 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.133 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.133 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.134 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.135 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.135 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.136 2 DEBUG nova.virt.libvirt.driver [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.168 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:17:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:51.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.342 2 INFO nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Took 5.24 seconds to spawn the instance on the hypervisor.
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.343 2 DEBUG nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:17:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.416 2 INFO nova.compute.manager [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Took 6.17 seconds to build instance.
Oct 10 10:17:51 compute-0 nova_compute[261329]: 2025-10-10 10:17:51.437 2 DEBUG oslo_concurrency.lockutils [None req-3fdde17e-21e1-4cfb-81b2-e6ce89cb090e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:52 compute-0 ceph-mon[73551]: pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:17:52 compute-0 sshd-session[277622]: Failed password for root from 91.224.92.108 port 52486 ssh2
Oct 10 10:17:52 compute-0 nova_compute[261329]: 2025-10-10 10:17:52.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:53 compute-0 nova_compute[261329]: 2025-10-10 10:17:53.182 2 DEBUG nova.compute.manager [req-72e40ea8-a7df-4f8c-ba6d-f807b0ff4cee req-286ae978-5221-4fd8-9c66-6d5836d25e72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:53 compute-0 nova_compute[261329]: 2025-10-10 10:17:53.182 2 DEBUG oslo_concurrency.lockutils [req-72e40ea8-a7df-4f8c-ba6d-f807b0ff4cee req-286ae978-5221-4fd8-9c66-6d5836d25e72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:53 compute-0 nova_compute[261329]: 2025-10-10 10:17:53.183 2 DEBUG oslo_concurrency.lockutils [req-72e40ea8-a7df-4f8c-ba6d-f807b0ff4cee req-286ae978-5221-4fd8-9c66-6d5836d25e72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:53 compute-0 nova_compute[261329]: 2025-10-10 10:17:53.183 2 DEBUG oslo_concurrency.lockutils [req-72e40ea8-a7df-4f8c-ba6d-f807b0ff4cee req-286ae978-5221-4fd8-9c66-6d5836d25e72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:53 compute-0 nova_compute[261329]: 2025-10-10 10:17:53.183 2 DEBUG nova.compute.manager [req-72e40ea8-a7df-4f8c-ba6d-f807b0ff4cee req-286ae978-5221-4fd8-9c66-6d5836d25e72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] No waiting events found dispatching network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:53 compute-0 nova_compute[261329]: 2025-10-10 10:17:53.183 2 WARNING nova.compute.manager [req-72e40ea8-a7df-4f8c-ba6d-f807b0ff4cee req-286ae978-5221-4fd8-9c66-6d5836d25e72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received unexpected event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d for instance with vm_state active and task_state None.
Oct 10 10:17:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:17:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:53.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:17:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:53 compute-0 sshd-session[277622]: Received disconnect from 91.224.92.108 port 52486:11:  [preauth]
Oct 10 10:17:53 compute-0 sshd-session[277622]: Disconnected from authenticating user root 91.224.92.108 port 52486 [preauth]
Oct 10 10:17:53 compute-0 sshd-session[277622]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 10 10:17:54 compute-0 nova_compute[261329]: 2025-10-10 10:17:54.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:54 compute-0 ceph-mon[73551]: pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:54 compute-0 unix_chkpwd[278101]: password check failed for user (root)
Oct 10 10:17:54 compute-0 sshd-session[278098]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 10 10:17:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:55.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:55.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:55 compute-0 ovn_controller[153080]: 2025-10-10T10:17:55Z|00051|binding|INFO|Releasing lport e9f075b6-37df-4f28-90c0-0fcdd3460568 from this chassis (sb_readonly=0)
Oct 10 10:17:55 compute-0 NetworkManager[44849]: <info>  [1760091475.9376] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 10 10:17:55 compute-0 NetworkManager[44849]: <info>  [1760091475.9388] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 10 10:17:55 compute-0 nova_compute[261329]: 2025-10-10 10:17:55.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:55 compute-0 ovn_controller[153080]: 2025-10-10T10:17:55Z|00052|binding|INFO|Releasing lport e9f075b6-37df-4f28-90c0-0fcdd3460568 from this chassis (sb_readonly=0)
Oct 10 10:17:55 compute-0 nova_compute[261329]: 2025-10-10 10:17:55.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:55 compute-0 nova_compute[261329]: 2025-10-10 10:17:55.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.127 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.127 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.127 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.249 2 DEBUG nova.compute.manager [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-changed-864e1646-5abd-4268-a80a-c224425c842d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.250 2 DEBUG nova.compute.manager [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Refreshing instance network info cache due to event network-changed-864e1646-5abd-4268-a80a-c224425c842d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.250 2 DEBUG oslo_concurrency.lockutils [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.250 2 DEBUG oslo_concurrency.lockutils [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.250 2 DEBUG nova.network.neutron [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Refreshing network info cache for port 864e1646-5abd-4268-a80a-c224425c842d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:17:56 compute-0 ceph-mon[73551]: pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1465875043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3077656643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.449 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.472 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.473 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.473 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.473 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.473 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.474 2 INFO nova.compute.manager [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Terminating instance
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.475 2 DEBUG nova.compute.manager [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:17:56 compute-0 kernel: tap864e1646-5a (unregistering): left promiscuous mode
Oct 10 10:17:56 compute-0 NetworkManager[44849]: <info>  [1760091476.5220] device (tap864e1646-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:17:56 compute-0 sshd-session[278098]: Failed password for root from 91.224.92.108 port 10970 ssh2
Oct 10 10:17:56 compute-0 ovn_controller[153080]: 2025-10-10T10:17:56Z|00053|binding|INFO|Releasing lport 864e1646-5abd-4268-a80a-c224425c842d from this chassis (sb_readonly=0)
Oct 10 10:17:56 compute-0 ovn_controller[153080]: 2025-10-10T10:17:56Z|00054|binding|INFO|Setting lport 864e1646-5abd-4268-a80a-c224425c842d down in Southbound
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 ovn_controller[153080]: 2025-10-10T10:17:56Z|00055|binding|INFO|Removing iface tap864e1646-5a ovn-installed in OVS
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.540 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:de:db 10.100.0.4'], port_security=['fa:16:3e:19:de:db 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1060241160', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c1cdb119-d621-43f0-9cde-b0a0da0c0239', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1060241160', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58a83406-32bd-40d9-b3dd-ed56e38abb09, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=864e1646-5abd-4268-a80a-c224425c842d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.542 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 864e1646-5abd-4268-a80a-c224425c842d in datapath f2187c16-3ad9-4fc6-892a-d36a6262d4d0 unbound from our chassis
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.543 162925 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2187c16-3ad9-4fc6-892a-d36a6262d4d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.545 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[c40552f7-b241-4b6b-b13e-49301cdc313d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.546 162925 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0 namespace which is not needed anymore
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 10 10:17:56 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 6.284s CPU time.
Oct 10 10:17:56 compute-0 systemd-machined[215425]: Machine qemu-3-instance-00000008 terminated.
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0[278081]: [NOTICE]   (278085) : haproxy version is 2.8.14-c23fe91
Oct 10 10:17:56 compute-0 neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0[278081]: [NOTICE]   (278085) : path to executable is /usr/sbin/haproxy
Oct 10 10:17:56 compute-0 neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0[278081]: [ALERT]    (278085) : Current worker (278087) exited with code 143 (Terminated)
Oct 10 10:17:56 compute-0 neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0[278081]: [WARNING]  (278085) : All workers exited. Exiting... (0)
Oct 10 10:17:56 compute-0 systemd[1]: libpod-33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a.scope: Deactivated successfully.
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 podman[278128]: 2025-10-10 10:17:56.705072333 +0000 UTC m=+0.050388636 container died 33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.709 2 INFO nova.virt.libvirt.driver [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Instance destroyed successfully.
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.709 2 DEBUG nova.objects.instance [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid c1cdb119-d621-43f0-9cde-b0a0da0c0239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.724 2 DEBUG nova.virt.libvirt.vif [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-271154650',display_name='tempest-TestNetworkBasicOps-server-271154650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-271154650',id=8,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHElCqDyCQ+82hX9vJ8K1kIG6Yt4k7uqQlCtgDpjVAyd9GFMAUZl401bxv9GULrJf58YsTDnw1NNFBQ9ksOoC9Fo48vf+QVftSyAx+s1pKM02LoH8hpZOMHqdZ0sPl7XZg==',key_name='tempest-TestNetworkBasicOps-1303595731',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:17:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-93pmada6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:17:51Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=c1cdb119-d621-43f0-9cde-b0a0da0c0239,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.725 2 DEBUG nova.network.os_vif_util [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.726 2 DEBUG nova.network.os_vif_util [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.726 2 DEBUG os_vif [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap864e1646-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a-userdata-shm.mount: Deactivated successfully.
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.733 2 INFO os_vif [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:de:db,bridge_name='br-int',has_traffic_filtering=True,id=864e1646-5abd-4268-a80a-c224425c842d,network=Network(f2187c16-3ad9-4fc6-892a-d36a6262d4d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap864e1646-5a')
Oct 10 10:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f56f17379ffce28902a60441cf56a932a02a6b284a9471a4db2a94c909c3af49-merged.mount: Deactivated successfully.
Oct 10 10:17:56 compute-0 podman[278128]: 2025-10-10 10:17:56.750758756 +0000 UTC m=+0.096075059 container cleanup 33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:17:56 compute-0 systemd[1]: libpod-conmon-33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a.scope: Deactivated successfully.
Oct 10 10:17:56 compute-0 podman[278185]: 2025-10-10 10:17:56.807114793 +0000 UTC m=+0.035854997 container remove 33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.821 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[e68de3d3-fdb7-4b37-82a2-c47e1949647b]: (4, ('Fri Oct 10 10:17:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0 (33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a)\n33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a\nFri Oct 10 10:17:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0 (33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a)\n33b7754a1a826a3ed8375e8ca4df29d4488a2eb97ef2303b342867c82b674b5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.823 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[c41265ba-1cf8-4174-b568-0a43b8efcd82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.824 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2187c16-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:56 compute-0 kernel: tapf2187c16-30: left promiscuous mode
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.832 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[a384c2bf-93d4-4178-ab53-b56db749fc34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 nova_compute[261329]: 2025-10-10 10:17:56.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.876 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[509b519d-a42b-4ce4-886a-280552d32e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.877 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[3e64fd86-cc48-47ad-9a2a-1d63128cc375]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.898 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccf302b-b94f-4893-a34d-4f1e99c1361e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437373, 'reachable_time': 26033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278204, 'error': None, 'target': 'ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.901 163038 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f2187c16-3ad9-4fc6-892a-d36a6262d4d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:17:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:17:56.901 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[442fe14f-5757-482e-bc33-08ba20ee3a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:56 compute-0 systemd[1]: run-netns-ovnmeta\x2df2187c16\x2d3ad9\x2d4fc6\x2d892a\x2dd36a6262d4d0.mount: Deactivated successfully.
Oct 10 10:17:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.144 2 INFO nova.virt.libvirt.driver [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Deleting instance files /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239_del
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.145 2 INFO nova.virt.libvirt.driver [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Deletion of /var/lib/nova/instances/c1cdb119-d621-43f0-9cde-b0a0da0c0239_del complete
Oct 10 10:17:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:57.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:17:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:17:57.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:17:57 compute-0 sudo[278206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:17:57 compute-0 sudo[278206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:57 compute-0 sudo[278206]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.204 2 INFO nova.compute.manager [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.204 2 DEBUG oslo.service.loopingcall [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.204 2 DEBUG nova.compute.manager [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.204 2 DEBUG nova.network.neutron [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:17:57 compute-0 unix_chkpwd[278233]: password check failed for user (root)
Oct 10 10:17:57 compute-0 sudo[278231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:17:57 compute-0 sudo[278231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:57.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:57] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:17:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:17:57] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:17:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:17:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:17:57 compute-0 sudo[278231]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:57 compute-0 nova_compute[261329]: 2025-10-10 10:17:57.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:17:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:17:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:17:58 compute-0 sudo[278288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:17:58 compute-0 sudo[278288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:58 compute-0 sudo[278288]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:58 compute-0 sudo[278313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:17:58 compute-0 sudo[278313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:58 compute-0 ceph-mon[73551]: pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:17:58 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.344 2 DEBUG nova.network.neutron [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updated VIF entry in instance network info cache for port 864e1646-5abd-4268-a80a-c224425c842d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.345 2 DEBUG nova.network.neutron [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updating instance_info_cache with network_info: [{"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.371 2 DEBUG oslo_concurrency.lockutils [req-1926e433-1a43-43a3-80b7-6003aa7a88eb req-1767b527-a6b9-446d-b1f5-b2ad52378584 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.372 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquired lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.373 2 DEBUG nova.network.neutron [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.373 2 DEBUG nova.objects.instance [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lazy-loading 'info_cache' on Instance uuid c1cdb119-d621-43f0-9cde-b0a0da0c0239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.375 2 DEBUG nova.compute.manager [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-vif-unplugged-864e1646-5abd-4268-a80a-c224425c842d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.375 2 DEBUG oslo_concurrency.lockutils [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.375 2 DEBUG oslo_concurrency.lockutils [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.375 2 DEBUG oslo_concurrency.lockutils [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.375 2 DEBUG nova.compute.manager [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] No waiting events found dispatching network-vif-unplugged-864e1646-5abd-4268-a80a-c224425c842d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 DEBUG nova.compute.manager [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-vif-unplugged-864e1646-5abd-4268-a80a-c224425c842d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 DEBUG nova.compute.manager [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 DEBUG oslo_concurrency.lockutils [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 DEBUG oslo_concurrency.lockutils [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 DEBUG oslo_concurrency.lockutils [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 DEBUG nova.compute.manager [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] No waiting events found dispatching network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.376 2 WARNING nova.compute.manager [req-336519c2-4266-4e92-9a7f-0bfc517a7546 req-0b657dfb-432a-4816-be98-4d0095277961 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Received unexpected event network-vif-plugged-864e1646-5abd-4268-a80a-c224425c842d for instance with vm_state active and task_state deleting.
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.518208727 +0000 UTC m=+0.043149692 container create c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:17:58 compute-0 systemd[1]: Started libpod-conmon-c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6.scope.
Oct 10 10:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.501735146 +0000 UTC m=+0.026676091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.59736053 +0000 UTC m=+0.122301525 container init c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_shockley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.606930358 +0000 UTC m=+0.131871313 container start c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_shockley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.610556295 +0000 UTC m=+0.135497280 container attach c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 10:17:58 compute-0 angry_shockley[278393]: 167 167
Oct 10 10:17:58 compute-0 systemd[1]: libpod-c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6.scope: Deactivated successfully.
Oct 10 10:17:58 compute-0 conmon[278393]: conmon c4bed579fb754d22d1ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6.scope/container/memory.events
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.613966235 +0000 UTC m=+0.138907170 container died c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd324dccdc214bab3930f1757157be667e94042efa4c038516536625fb527a4-merged.mount: Deactivated successfully.
Oct 10 10:17:58 compute-0 podman[278377]: 2025-10-10 10:17:58.649533012 +0000 UTC m=+0.174473957 container remove c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_shockley, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:17:58 compute-0 systemd[1]: libpod-conmon-c4bed579fb754d22d1eef0d06f15c02264b328083b3c478db5b19f9d521f79c6.scope: Deactivated successfully.
Oct 10 10:17:58 compute-0 podman[278417]: 2025-10-10 10:17:58.821791707 +0000 UTC m=+0.049144737 container create b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:17:58 compute-0 systemd[1]: Started libpod-conmon-b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b.scope.
Oct 10 10:17:58 compute-0 podman[278417]: 2025-10-10 10:17:58.804693525 +0000 UTC m=+0.032046585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde384602bb05ec82fb326352787a6a442ddee9aee40554ce6c16b5fa0acf6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde384602bb05ec82fb326352787a6a442ddee9aee40554ce6c16b5fa0acf6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde384602bb05ec82fb326352787a6a442ddee9aee40554ce6c16b5fa0acf6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde384602bb05ec82fb326352787a6a442ddee9aee40554ce6c16b5fa0acf6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bde384602bb05ec82fb326352787a6a442ddee9aee40554ce6c16b5fa0acf6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:17:58 compute-0 podman[278417]: 2025-10-10 10:17:58.934836931 +0000 UTC m=+0.162189971 container init b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 10:17:58 compute-0 podman[278417]: 2025-10-10 10:17:58.941965682 +0000 UTC m=+0.169318732 container start b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:17:58 compute-0 podman[278417]: 2025-10-10 10:17:58.945815075 +0000 UTC m=+0.173168135 container attach b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.951 2 DEBUG nova.network.neutron [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:58 compute-0 nova_compute[261329]: 2025-10-10 10:17:58.971 2 INFO nova.compute.manager [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Took 1.77 seconds to deallocate network for instance.
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.025 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.026 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.077 2 DEBUG oslo_concurrency.processutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:59 compute-0 sshd-session[278098]: Failed password for root from 91.224.92.108 port 10970 ssh2
Oct 10 10:17:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:59 compute-0 dazzling_leavitt[278434]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:17:59 compute-0 dazzling_leavitt[278434]: --> All data devices are unavailable
Oct 10 10:17:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.003000097s ======
Oct 10 10:17:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000097s
Oct 10 10:17:59 compute-0 systemd[1]: libpod-b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b.scope: Deactivated successfully.
Oct 10 10:17:59 compute-0 podman[278417]: 2025-10-10 10:17:59.327290366 +0000 UTC m=+0.554643406 container died b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bde384602bb05ec82fb326352787a6a442ddee9aee40554ce6c16b5fa0acf6e-merged.mount: Deactivated successfully.
Oct 10 10:17:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:59 compute-0 podman[278417]: 2025-10-10 10:17:59.372556416 +0000 UTC m=+0.599909456 container remove b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 10:17:59 compute-0 systemd[1]: libpod-conmon-b019d41b7f968f13dd2c79056faf1501c5124590a4379860a66d9d30d7f24e7b.scope: Deactivated successfully.
Oct 10 10:17:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:17:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:59.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:59 compute-0 sudo[278313]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:59 compute-0 sudo[278482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:17:59 compute-0 sudo[278482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:59 compute-0 sudo[278482]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.497 2 DEBUG nova.network.neutron [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updating instance_info_cache with network_info: [{"id": "864e1646-5abd-4268-a80a-c224425c842d", "address": "fa:16:3e:19:de:db", "network": {"id": "f2187c16-3ad9-4fc6-892a-d36a6262d4d0", "bridge": "br-int", "label": "tempest-network-smoke--807297116", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap864e1646-5a", "ovs_interfaceid": "864e1646-5abd-4268-a80a-c224425c842d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.516 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Releasing lock "refresh_cache-c1cdb119-d621-43f0-9cde-b0a0da0c0239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.517 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:17:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:17:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/872784269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.517 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.517 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.518 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.518 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.518 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 sudo[278507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.518 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.518 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.518 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:59 compute-0 sudo[278507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.536 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.538 2 DEBUG oslo_concurrency.processutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.543 2 DEBUG nova.compute.provider_tree [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.557 2 DEBUG nova.scheduler.client.report [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.581 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.585 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.585 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.586 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.586 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:59 compute-0 sudo[278534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:17:59 compute-0 sudo[278534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:59 compute-0 sudo[278534]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.631 2 INFO nova.scheduler.client.report [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance c1cdb119-d621-43f0-9cde-b0a0da0c0239
Oct 10 10:17:59 compute-0 nova_compute[261329]: 2025-10-10 10:17:59.692 2 DEBUG oslo_concurrency.lockutils [None req-a2f94b0f-a04a-401a-90ae-00a1da026837 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "c1cdb119-d621-43f0-9cde-b0a0da0c0239" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:59 compute-0 podman[278618]: 2025-10-10 10:17:59.919801042 +0000 UTC m=+0.042289845 container create 99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 10:17:59 compute-0 systemd[1]: Started libpod-conmon-99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215.scope.
Oct 10 10:17:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:17:59 compute-0 podman[278618]: 2025-10-10 10:17:59.899738265 +0000 UTC m=+0.022227088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:17:59 compute-0 podman[278618]: 2025-10-10 10:17:59.996024469 +0000 UTC m=+0.118513302 container init 99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 10:18:00 compute-0 podman[278618]: 2025-10-10 10:18:00.002198328 +0000 UTC m=+0.124687131 container start 99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:18:00 compute-0 podman[278618]: 2025-10-10 10:18:00.004999669 +0000 UTC m=+0.127488512 container attach 99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 10:18:00 compute-0 fervent_chandrasekhar[278634]: 167 167
Oct 10 10:18:00 compute-0 systemd[1]: libpod-99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215.scope: Deactivated successfully.
Oct 10 10:18:00 compute-0 podman[278618]: 2025-10-10 10:18:00.007954504 +0000 UTC m=+0.130443317 container died 99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:18:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:18:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277334980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:00 compute-0 unix_chkpwd[278648]: password check failed for user (root)
Oct 10 10:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f71b528d298b1a39022d7acf39538ac835dcd0df64b7433225a6714c9e4b5b0-merged.mount: Deactivated successfully.
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.047 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:00 compute-0 podman[278618]: 2025-10-10 10:18:00.059535748 +0000 UTC m=+0.182024551 container remove 99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 10:18:00 compute-0 systemd[1]: libpod-conmon-99cb258a70dce320f2e148cca9ec449000ea3c282a01a2637f48fa9d4facf215.scope: Deactivated successfully.
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.224684543 +0000 UTC m=+0.047075900 container create a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cori, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.233 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.235 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4546MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.235 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.235 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:00 compute-0 systemd[1]: Started libpod-conmon-a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4.scope.
Oct 10 10:18:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f242cfa34207bbfb666dd2f53e0d7c5da2c03e122bc28e4760d412ea424418/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f242cfa34207bbfb666dd2f53e0d7c5da2c03e122bc28e4760d412ea424418/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f242cfa34207bbfb666dd2f53e0d7c5da2c03e122bc28e4760d412ea424418/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f242cfa34207bbfb666dd2f53e0d7c5da2c03e122bc28e4760d412ea424418/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.20289431 +0000 UTC m=+0.025285697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.305060994 +0000 UTC m=+0.127452401 container init a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cori, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.306 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.306 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.312619068 +0000 UTC m=+0.135010425 container start a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.316380529 +0000 UTC m=+0.138771906 container attach a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.324 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:00 compute-0 ceph-mon[73551]: pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 10 10:18:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/872784269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3277334980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:00 compute-0 podman[278680]: 2025-10-10 10:18:00.355512561 +0000 UTC m=+0.072225799 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:18:00 compute-0 podman[278688]: 2025-10-10 10:18:00.363064294 +0000 UTC m=+0.059982584 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:18:00 compute-0 podman[278721]: 2025-10-10 10:18:00.472180343 +0000 UTC m=+0.086292983 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:18:00 compute-0 unruffled_cori[278679]: {
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:     "0": [
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:         {
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "devices": [
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "/dev/loop3"
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             ],
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "lv_name": "ceph_lv0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "lv_size": "21470642176",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "name": "ceph_lv0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "tags": {
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.cluster_name": "ceph",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.crush_device_class": "",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.encrypted": "0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.osd_id": "0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.type": "block",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.vdo": "0",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:                 "ceph.with_tpm": "0"
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             },
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "type": "block",
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:             "vg_name": "ceph_vg0"
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:         }
Oct 10 10:18:00 compute-0 unruffled_cori[278679]:     ]
Oct 10 10:18:00 compute-0 unruffled_cori[278679]: }
Oct 10 10:18:00 compute-0 systemd[1]: libpod-a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4.scope: Deactivated successfully.
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.60977474 +0000 UTC m=+0.432166137 container died a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cori, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f242cfa34207bbfb666dd2f53e0d7c5da2c03e122bc28e4760d412ea424418-merged.mount: Deactivated successfully.
Oct 10 10:18:00 compute-0 podman[278662]: 2025-10-10 10:18:00.674857129 +0000 UTC m=+0.497248526 container remove a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:18:00 compute-0 systemd[1]: libpod-conmon-a498cc070d9afcb429ee6f52ea63d75813fad0387a790b63eb1749c03398ddc4.scope: Deactivated successfully.
Oct 10 10:18:00 compute-0 sudo[278507]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:18:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477850028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.778 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.785 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.803 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:18:00 compute-0 sudo[278782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:18:00 compute-0 sudo[278782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:00 compute-0 sudo[278782]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.839 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.841 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:00 compute-0 sudo[278810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:18:00 compute-0 sudo[278810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.947 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:00 compute-0 nova_compute[261329]: 2025-10-10 10:18:00.948 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.312706096 +0000 UTC m=+0.042912905 container create a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:18:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:18:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:01 compute-0 systemd[1]: Started libpod-conmon-a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506.scope.
Oct 10 10:18:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3477850028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.296747312 +0000 UTC m=+0.026954151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:18:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:01.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.405127006 +0000 UTC m=+0.135333915 container init a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.41394315 +0000 UTC m=+0.144149999 container start a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.417826585 +0000 UTC m=+0.148033434 container attach a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 10:18:01 compute-0 vigilant_rosalind[278893]: 167 167
Oct 10 10:18:01 compute-0 systemd[1]: libpod-a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506.scope: Deactivated successfully.
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.42170413 +0000 UTC m=+0.151910979 container died a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 10:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-79868ed56347f7f6f46218e8c42b39c518aa1cd89586016ee6240dafb527be74-merged.mount: Deactivated successfully.
Oct 10 10:18:01 compute-0 podman[278877]: 2025-10-10 10:18:01.46944196 +0000 UTC m=+0.199648779 container remove a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:18:01 compute-0 systemd[1]: libpod-conmon-a51cb22426145810c255b2216d621116cc91c03a7bb1ecaaadd60da0aa002506.scope: Deactivated successfully.
Oct 10 10:18:01 compute-0 podman[278916]: 2025-10-10 10:18:01.708741687 +0000 UTC m=+0.069262216 container create ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 10:18:01 compute-0 nova_compute[261329]: 2025-10-10 10:18:01.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:01 compute-0 systemd[1]: Started libpod-conmon-ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e.scope.
Oct 10 10:18:01 compute-0 podman[278916]: 2025-10-10 10:18:01.681537979 +0000 UTC m=+0.042058578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:18:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4655d04c3c3192c12bfc643b7bf3c9d36943db43a3b659f7723d4102170d212/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4655d04c3c3192c12bfc643b7bf3c9d36943db43a3b659f7723d4102170d212/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4655d04c3c3192c12bfc643b7bf3c9d36943db43a3b659f7723d4102170d212/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4655d04c3c3192c12bfc643b7bf3c9d36943db43a3b659f7723d4102170d212/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:01 compute-0 podman[278916]: 2025-10-10 10:18:01.824210639 +0000 UTC m=+0.184731168 container init ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:18:01 compute-0 podman[278916]: 2025-10-10 10:18:01.834290375 +0000 UTC m=+0.194810914 container start ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:18:01 compute-0 podman[278916]: 2025-10-10 10:18:01.838037335 +0000 UTC m=+0.198557894 container attach ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:18:02 compute-0 sshd-session[278098]: Failed password for root from 91.224.92.108 port 10970 ssh2
Oct 10 10:18:02 compute-0 ceph-mon[73551]: pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:02 compute-0 lvm[279008]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:18:02 compute-0 lvm[279008]: VG ceph_vg0 finished
Oct 10 10:18:02 compute-0 distracted_noether[278932]: {}
Oct 10 10:18:02 compute-0 systemd[1]: libpod-ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e.scope: Deactivated successfully.
Oct 10 10:18:02 compute-0 podman[278916]: 2025-10-10 10:18:02.597168703 +0000 UTC m=+0.957689242 container died ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noether, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:18:02 compute-0 systemd[1]: libpod-ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e.scope: Consumed 1.192s CPU time.
Oct 10 10:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4655d04c3c3192c12bfc643b7bf3c9d36943db43a3b659f7723d4102170d212-merged.mount: Deactivated successfully.
Oct 10 10:18:02 compute-0 podman[278916]: 2025-10-10 10:18:02.646480624 +0000 UTC m=+1.007001163 container remove ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noether, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 10:18:02 compute-0 systemd[1]: libpod-conmon-ec979bd038bd44d33a8458e8ad31c0e90e049cc4c4985a8a88c1ac0c8b2e335e.scope: Deactivated successfully.
Oct 10 10:18:02 compute-0 sudo[278810]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:18:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:18:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:18:02 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:18:02 compute-0 sudo[279022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:18:02 compute-0 nova_compute[261329]: 2025-10-10 10:18:02.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:02 compute-0 sudo[279022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:02 compute-0 sudo[279022]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:02 compute-0 sshd-session[278098]: Received disconnect from 91.224.92.108 port 10970:11:  [preauth]
Oct 10 10:18:02 compute-0 sshd-session[278098]: Disconnected from authenticating user root 91.224.92.108 port 10970 [preauth]
Oct 10 10:18:02 compute-0 sshd-session[278098]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 10 10:18:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3550810255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:18:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:18:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:04 compute-0 ceph-mon[73551]: pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3370266523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 10 10:18:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:05 compute-0 ceph-mon[73551]: pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 10 10:18:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:05.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:06 compute-0 nova_compute[261329]: 2025-10-10 10:18:06.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 10 10:18:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:07.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:18:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:07.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:18:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:07.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
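Both webhook targets time out on 192.168.122.101 and 192.168.122.102 port 8443, so nothing on those hosts is accepting connections for the dashboard receiver. A throwaway stand-in receiver (hypothetical; the real endpoint is the Ceph dashboard API that should be serving this path) can confirm plain reachability of the URL Alertmanager keeps posting to:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if self.path == "/api/prometheus_receiver":
            # Alertmanager posts a JSON alert payload to this path.
            print("alert payload:", json.loads(body or b"{}"))
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Listen where the failing webhook URLs point (plain HTTP, per the
    # http:// URLs in the log); only a network-reachability check.
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```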
Oct 10 10:18:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:07.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:07] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:18:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:07] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Oct 10 10:18:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:07.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:07 compute-0 nova_compute[261329]: 2025-10-10 10:18:07.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:08 compute-0 ceph-mon[73551]: pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 10 10:18:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 10 10:18:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2863660861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:09.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:09.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:10 compute-0 ceph-mon[73551]: pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 10 10:18:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.163194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491163234, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1105, "num_deletes": 501, "total_data_size": 1301790, "memory_usage": 1335160, "flush_reason": "Manual Compaction"}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491174502, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 897365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28379, "largest_seqno": 29483, "table_properties": {"data_size": 893126, "index_size": 1379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13960, "raw_average_key_size": 19, "raw_value_size": 882191, "raw_average_value_size": 1232, "num_data_blocks": 61, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091425, "oldest_key_time": 1760091425, "file_creation_time": 1760091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 11372 microseconds, and 6147 cpu microseconds.
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.174561) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 897365 bytes OK
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.174591) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177367) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177389) EVENT_LOG_v1 {"time_micros": 1760091491177381, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177413) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1295678, prev total WAL file size 1295678, number of live WAL files 2.
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.178397) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(876KB)], [62(16MB)]
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491178465, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17822489, "oldest_snapshot_seqno": -1}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5751 keys, 12048603 bytes, temperature: kUnknown
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491243669, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12048603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12012571, "index_size": 20562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148719, "raw_average_key_size": 25, "raw_value_size": 11911101, "raw_average_value_size": 2071, "num_data_blocks": 824, "num_entries": 5751, "num_filter_entries": 5751, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.244011) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12048603 bytes
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.245492) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 272.8 rd, 184.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(33.3) write-amplify(13.4) OK, records in: 6744, records dropped: 993 output_compression: NoCompression
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.245520) EVENT_LOG_v1 {"time_micros": 1760091491245507, "job": 34, "event": "compaction_finished", "compaction_time_micros": 65324, "compaction_time_cpu_micros": 26957, "output_level": 6, "num_output_files": 1, "total_output_size": 12048603, "num_input_records": 6744, "num_output_records": 5751, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491246179, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491251884, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.178278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
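The amplification figures RocksDB printed for JOB 34 can be reproduced from the EVENT_LOG_v1 entries above: write amplification is compaction output bytes over the newly flushed L0 bytes, and read-write amplification additionally counts the bytes read back in. A check using the numbers exactly as logged:

```python
# Values copied from the EVENT_LOG_v1 entries for flush JOB 33 / compaction JOB 34.
l0_input_bytes = 897_365        # table #64, the freshly flushed L0 file
total_input_bytes = 17_822_489  # "input_data_size" for job 34 (L0 + L6 inputs)
output_bytes = 12_048_603       # "total_output_size" for job 34

write_amplify = output_bytes / l0_input_bytes
read_write_amplify = (total_input_bytes + output_bytes) / l0_input_bytes

print(f"write-amplify      {write_amplify:.1f}")       # 13.4, as logged
print(f"read-write-amplify {read_write_amplify:.1f}")  # 33.3, as logged
```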
Oct 10 10:18:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:11.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:11.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:11 compute-0 nova_compute[261329]: 2025-10-10 10:18:11.708 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091476.706246, c1cdb119-d621-43f0-9cde-b0a0da0c0239 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:18:11 compute-0 nova_compute[261329]: 2025-10-10 10:18:11.708 2 INFO nova.compute.manager [-] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] VM Stopped (Lifecycle Event)
Oct 10 10:18:11 compute-0 nova_compute[261329]: 2025-10-10 10:18:11.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:11 compute-0 nova_compute[261329]: 2025-10-10 10:18:11.748 2 DEBUG nova.compute.manager [None req-8a373c5a-d452-4361-a7ca-9f8740983119 - - - - - -] [instance: c1cdb119-d621-43f0-9cde-b0a0da0c0239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:18:12 compute-0 ceph-mon[73551]: pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:12 compute-0 nova_compute[261329]: 2025-10-10 10:18:12.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:13.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:14 compute-0 ceph-mon[73551]: pgmap v968: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2249811561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1035205217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:15 compute-0 podman[279060]: 2025-10-10 10:18:15.271159831 +0000 UTC m=+0.109611410 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:18:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:15.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:16 compute-0 ceph-mon[73551]: pgmap v969: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:18:16
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log']
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:18:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:18:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:18:16 compute-0 nova_compute[261329]: 2025-10-10 10:18:16.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
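Each pg_autoscaler line above is the product capacity_ratio × bias × a PG budget, which works out to 300 here — plausibly the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs, though that decomposition is an assumption. A sketch reproducing three of the raw targets (the final quantization to a power of two also honors pool minimums and the current pg_num, which this sketch does not model):

```python
PG_BUDGET = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)

pools = {
    # name: (capacity_ratio, bias) -- copied from the log lines above
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.0003459970412515465, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (ratio, bias) in pools.items():
    raw = ratio * bias * PG_BUDGET
    print(f"{name:20s} raw pg target {raw:.6g}")
# .mgr                 raw pg target 0.00215572   -> logged "pg target 0.0021557..."
# vms                  raw pg target 0.103799     -> logged "pg target 0.10379..."
# cephfs.cephfs.meta   raw pg target 0.000610471  -> logged "pg target 0.00061047..."
```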
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:18:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:18:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:17.182Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:18:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:17.182Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:18:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:17.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:18:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:17.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:18:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:18:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:17.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:17 compute-0 nova_compute[261329]: 2025-10-10 10:18:17.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:18 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:18.024 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:18:18 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:18.025 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
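The agent saw SB_Global.nb_cfg move to 10, logged a 7-second delay, and — as the DbSetCommand at 10:18:25 further down shows — then stamped neutron:ovn-metadata-sb-cfg=10 onto its Chassis_Private row. A schematic of that delay-then-commit pattern (the names and the randomized-delay rationale are illustrative assumptions, not Neutron's actual code):

```python
import random
import threading

def on_sb_global_update(nb_cfg, commit_fn):
    # Assumed rationale: spread chassis updates out so every agent on the
    # deployment doesn't write to the southbound DB at the same instant.
    delay = random.randint(0, 10)  # the log above shows a 7 s delay
    print(f"Delaying updating chassis table for {delay} seconds")
    threading.Timer(delay, commit_fn, args=(nb_cfg,)).start()

def commit(nb_cfg):
    # Stand-in for the DbSetCommand visible at 10:18:25: set
    # external_ids['neutron:ovn-metadata-sb-cfg'] on the Chassis_Private row.
    print(f"db_set Chassis_Private external_ids "
          f"neutron:ovn-metadata-sb-cfg={nb_cfg}")

on_sb_global_update(10, commit)
```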
Oct 10 10:18:18 compute-0 nova_compute[261329]: 2025-10-10 10:18:18.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:18 compute-0 ceph-mon[73551]: pgmap v970: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:18 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 10:18:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:18:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:19.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:19.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:19 compute-0 sudo[279083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:18:19 compute-0 sudo[279083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:19 compute-0 sudo[279083]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:20 compute-0 ceph-mon[73551]: pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:18:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 10 10:18:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:21.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:21.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:21 compute-0 nova_compute[261329]: 2025-10-10 10:18:21.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:22 compute-0 ceph-mon[73551]: pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 10 10:18:22 compute-0 nova_compute[261329]: 2025-10-10 10:18:22.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:18:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:23.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:23.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:24 compute-0 ceph-mon[73551]: pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:18:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:25 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:25.027 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:18:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:25.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:25.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:26 compute-0 ceph-mon[73551]: pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:18:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1600136722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:26 compute-0 nova_compute[261329]: 2025-10-10 10:18:26.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:18:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:27.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:18:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:27.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:18:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1965479803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:18:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1965479803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:18:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:27.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:18:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:18:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:27.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:27 compute-0 nova_compute[261329]: 2025-10-10 10:18:27.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:28 compute-0 ceph-mon[73551]: pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:18:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:29.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:30 compute-0 ceph-mon[73551]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Oct 10 10:18:31 compute-0 podman[279122]: 2025-10-10 10:18:31.22109961 +0000 UTC m=+0.067416673 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:18:31 compute-0 podman[279123]: 2025-10-10 10:18:31.261709695 +0000 UTC m=+0.088897458 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct 10 10:18:31 compute-0 podman[279124]: 2025-10-10 10:18:31.279505114 +0000 UTC m=+0.112164901 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:18:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:31.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:18:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:31 compute-0 ceph-mon[73551]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Oct 10 10:18:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:31.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:31 compute-0 nova_compute[261329]: 2025-10-10 10:18:31.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:32 compute-0 nova_compute[261329]: 2025-10-10 10:18:32.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:32 compute-0 nova_compute[261329]: 2025-10-10 10:18:32.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:32 compute-0 nova_compute[261329]: 2025-10-10 10:18:32.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Oct 10 10:18:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:33.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:33.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:34 compute-0 ceph-mon[73551]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Oct 10 10:18:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=cleanup t=2025-10-10T10:18:34.718681828Z level=info msg="Completed cleanup jobs" duration=7.871741ms
Oct 10 10:18:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugins.update.checker t=2025-10-10T10:18:34.849041139Z level=info msg="Update check succeeded" duration=48.030113ms
Oct 10 10:18:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana.update.checker t=2025-10-10T10:18:34.88447487Z level=info msg="Update check succeeded" duration=48.660212ms
Oct 10 10:18:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:35.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:36 compute-0 ceph-mon[73551]: pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:36 compute-0 nova_compute[261329]: 2025-10-10 10:18:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:37.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:18:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:37] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:18:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:37] "GET /metrics HTTP/1.1" 200 48385 "" "Prometheus/2.51.0"
Oct 10 10:18:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:37.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:37 compute-0 nova_compute[261329]: 2025-10-10 10:18:37.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:38 compute-0 ceph-mon[73551]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:39.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:39.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:39 compute-0 sudo[279193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:18:39 compute-0 sudo[279193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:39 compute-0 sudo[279193]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:40 compute-0 ceph-mon[73551]: pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:41.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:41.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:41 compute-0 nova_compute[261329]: 2025-10-10 10:18:41.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:41.907 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:41.907 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:41.907 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:42 compute-0 ceph-mon[73551]: pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:42 compute-0 nova_compute[261329]: 2025-10-10 10:18:42.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:18:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:44 compute-0 ceph-mon[73551]: pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:18:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:44 compute-0 nova_compute[261329]: 2025-10-10 10:18:44.990 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:44 compute-0 nova_compute[261329]: 2025-10-10 10:18:44.990 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.021 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:18:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.126 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.126 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.135 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.135 2 INFO nova.compute.claims [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Claim successful on node compute-0.ctlplane.example.com
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.318 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:45.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:45.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:18:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655689645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.780 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.787 2 DEBUG nova.compute.provider_tree [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.815 2 DEBUG nova.scheduler.client.report [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.867 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.867 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.945 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.945 2 DEBUG nova.network.neutron [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.968 2 INFO nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:18:45 compute-0 nova_compute[261329]: 2025-10-10 10:18:45.992 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.143 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.145 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.145 2 INFO nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Creating image(s)
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.182 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:46 compute-0 podman[279247]: 2025-10-10 10:18:46.210478098 +0000 UTC m=+0.058601611 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.215 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:46 compute-0 ceph-mon[73551]: pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3655689645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.247 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.251 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.274 2 DEBUG nova.policy [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.316 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.317 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.318 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.318 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:18:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.347 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.351 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 18cfecd8-3017-4bde-906c-6b7784a3d544_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:18:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:18:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:18:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:18:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:18:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.738 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 18cfecd8-3017-4bde-906c-6b7784a3d544_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.846 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:18:46 compute-0 nova_compute[261329]: 2025-10-10 10:18:46.990 2 DEBUG nova.objects.instance [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 18cfecd8-3017-4bde-906c-6b7784a3d544 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.017 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.018 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Ensure instance console log exists: /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.019 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.019 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.020 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:47.185Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:18:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:47.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:18:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:47.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:47] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:18:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:47] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.402 2 DEBUG nova.network.neutron [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Successfully created port: ec07d396-e4e9-4e94-a3ef-9957f5b321d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:18:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:47.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:47 compute-0 nova_compute[261329]: 2025-10-10 10:18:47.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:48 compute-0 ceph-mon[73551]: pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.184 2 DEBUG nova.network.neutron [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Successfully updated port: ec07d396-e4e9-4e94-a3ef-9957f5b321d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.205 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.205 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.206 2 DEBUG nova.network.neutron [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.298 2 DEBUG nova.compute.manager [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-changed-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.299 2 DEBUG nova.compute.manager [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Refreshing instance network info cache due to event network-changed-ec07d396-e4e9-4e94-a3ef-9957f5b321d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.299 2 DEBUG oslo_concurrency.lockutils [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:18:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:49.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:49 compute-0 nova_compute[261329]: 2025-10-10 10:18:49.647 2 DEBUG nova.network.neutron [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:18:50 compute-0 ceph-mon[73551]: pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:51.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:51 compute-0 nova_compute[261329]: 2025-10-10 10:18:51.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:52 compute-0 ceph-mon[73551]: pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:52 compute-0 nova_compute[261329]: 2025-10-10 10:18:52.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:53.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.544 2 DEBUG nova.network.neutron [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updating instance_info_cache with network_info: [{"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.571 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.571 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Instance network_info: |[{"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.572 2 DEBUG oslo_concurrency.lockutils [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.572 2 DEBUG nova.network.neutron [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Refreshing network info cache for port ec07d396-e4e9-4e94-a3ef-9957f5b321d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.574 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Start _get_guest_xml network_info=[{"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_options': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.579 2 WARNING nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.586 2 DEBUG nova.virt.libvirt.host [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.587 2 DEBUG nova.virt.libvirt.host [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.594 2 DEBUG nova.virt.libvirt.host [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.594 2 DEBUG nova.virt.libvirt.host [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.595 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.595 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.595 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.596 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.596 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.596 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.596 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.596 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.597 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.597 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.597 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.597 2 DEBUG nova.virt.hardware [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:18:53 compute-0 nova_compute[261329]: 2025-10-10 10:18:53.600 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:18:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434918906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.054 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.086 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.090 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.267 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.267 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
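The heal task interleaved above runs as an oslo.service periodic task on the ComputeManager. A minimal sketch of how such a task is declared and driven; the 60s spacing and run_immediately flag are illustrative choices, not nova's configuration.

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Declared the same way ComputeManager declares its heal task.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            # Instances still building are skipped, as the log shows.
            print("Rebuilding the list of instances to heal")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)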
Oct 10 10:18:54 compute-0 ceph-mon[73551]: pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/434918906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:18:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376492106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.575 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.577 2 DEBUG nova.virt.libvirt.vif [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:18:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-376763456',display_name='tempest-TestNetworkBasicOps-server-376763456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-376763456',id=10,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMl05Q2kd4BnjEfpYur+319mg1yfy+wiVVa4alZXTZtYpVK6CwqSnYm5UoWAZUverDsKq1NJbLqzumWnGU1ynqtdWGl+B5lKE95q4mdEJoro52IKXn5aBncuPkGARRZT9g==',key_name='tempest-TestNetworkBasicOps-2037591979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-5tcdlcdl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:18:46Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=18cfecd8-3017-4bde-906c-6b7784a3d544,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.578 2 DEBUG nova.network.os_vif_util [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.579 2 DEBUG nova.network.os_vif_util [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.580 2 DEBUG nova.objects.instance [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18cfecd8-3017-4bde-906c-6b7784a3d544 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.600 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <uuid>18cfecd8-3017-4bde-906c-6b7784a3d544</uuid>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <name>instance-0000000a</name>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <memory>131072</memory>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <vcpu>1</vcpu>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <metadata>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:name>tempest-TestNetworkBasicOps-server-376763456</nova:name>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:creationTime>2025-10-10 10:18:53</nova:creationTime>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:flavor name="m1.nano">
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:memory>128</nova:memory>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:disk>1</nova:disk>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:swap>0</nova:swap>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </nova:flavor>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:owner>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </nova:owner>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <nova:ports>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <nova:port uuid="ec07d396-e4e9-4e94-a3ef-9957f5b321d0">
Oct 10 10:18:54 compute-0 nova_compute[261329]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         </nova:port>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </nova:ports>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </nova:instance>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </metadata>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <sysinfo type="smbios">
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <system>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <entry name="serial">18cfecd8-3017-4bde-906c-6b7784a3d544</entry>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <entry name="uuid">18cfecd8-3017-4bde-906c-6b7784a3d544</entry>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </system>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </sysinfo>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <os>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <boot dev="hd"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <smbios mode="sysinfo"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </os>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <features>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <acpi/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <apic/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <vmcoreinfo/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </features>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <clock offset="utc">
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <timer name="hpet" present="no"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </clock>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <cpu mode="host-model" match="exact">
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <disk type="network" device="disk">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/18cfecd8-3017-4bde-906c-6b7784a3d544_disk">
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </source>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <target dev="vda" bus="virtio"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <disk type="network" device="cdrom">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config">
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </source>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:18:54 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <target dev="sda" bus="sata"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <interface type="ethernet">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <mac address="fa:16:3e:9a:c3:f8"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <mtu size="1442"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <target dev="tapec07d396-e4"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <serial type="pty">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <log file="/var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/console.log" append="off"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </serial>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <video>
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </video>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <input type="tablet" bus="usb"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <rng model="virtio">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <controller type="usb" index="0"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     <memballoon model="virtio">
Oct 10 10:18:54 compute-0 nova_compute[261329]:       <stats period="10"/>
Oct 10 10:18:54 compute-0 nova_compute[261329]:     </memballoon>
Oct 10 10:18:54 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:18:54 compute-0 nova_compute[261329]: </domain>
Oct 10 10:18:54 compute-0 nova_compute[261329]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
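With the domain XML rendered, nova's libvirt driver defines and launches the guest through libvirt. A bare libvirt-python equivalent of that step, as a sketch: nova goes through its own Host/Guest wrappers rather than calling these APIs directly, the file name is hypothetical, and qemu:///system requires appropriate privileges.

    import libvirt

    # xml holds the <domain type="kvm"> document logged above
    # (file name is illustrative).
    with open("instance-0000000a.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it; systemd-machined then logs
                               # "Started Virtual Machine qemu-4-instance-0000000a"
    print(dom.name(), dom.UUIDString())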
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.602 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Preparing to wait for external event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.602 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.602 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.602 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.603 2 DEBUG nova.virt.libvirt.vif [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:18:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-376763456',display_name='tempest-TestNetworkBasicOps-server-376763456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-376763456',id=10,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMl05Q2kd4BnjEfpYur+319mg1yfy+wiVVa4alZXTZtYpVK6CwqSnYm5UoWAZUverDsKq1NJbLqzumWnGU1ynqtdWGl+B5lKE95q4mdEJoro52IKXn5aBncuPkGARRZT9g==',key_name='tempest-TestNetworkBasicOps-2037591979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-5tcdlcdl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:18:46Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=18cfecd8-3017-4bde-906c-6b7784a3d544,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.603 2 DEBUG nova.network.os_vif_util [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.604 2 DEBUG nova.network.os_vif_util [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.604 2 DEBUG os_vif [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec07d396-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec07d396-e4, col_values=(('external_ids', {'iface-id': 'ec07d396-e4e9-4e94-a3ef-9957f5b321d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:c3:f8', 'vm-uuid': '18cfecd8-3017-4bde-906c-6b7784a3d544'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:54 compute-0 NetworkManager[44849]: <info>  [1760091534.6130] manager: (tapec07d396-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.619 2 INFO os_vif [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4')
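The AddPortCommand/DbSetCommand transaction just completed is the os-vif OVS plug. Its ovs-vsctl equivalent, with the bridge, port name, and external_ids copied from the logged transaction, would look like the sketch below; os-vif itself talks to ovsdb-server through the ovsdbapp IDL, not the CLI.

    import subprocess

    # CLI equivalent of the AddPortCommand + DbSetCommand transaction.
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapec07d396-e4",
        "--", "set", "Interface", "tapec07d396-e4",
        "external_ids:iface-id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:9a:c3:f8",
        "external_ids:vm-uuid=18cfecd8-3017-4bde-906c-6b7784a3d544",
    ])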
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.673 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.674 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.674 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:9a:c3:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.674 2 INFO nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Using config drive
Oct 10 10:18:54 compute-0 nova_compute[261329]: 2025-10-10 10:18:54.700 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3376492106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:18:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:55.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.362 2 INFO nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Creating config drive at /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.367 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt2j0lu8a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:55.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.487 2 DEBUG nova.network.neutron [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updated VIF entry in instance network info cache for port ec07d396-e4e9-4e94-a3ef-9957f5b321d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.488 2 DEBUG nova.network.neutron [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updating instance_info_cache with network_info: [{"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.495 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt2j0lu8a" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.535 2 DEBUG nova.storage.rbd_utils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.539 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config 18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.566 2 DEBUG oslo_concurrency.lockutils [req-f5155fa2-1f2a-45f0-88ef-650db41e47fa req-432bfbeb-c71f-4be6-ade6-03180e69a928 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.726 2 DEBUG oslo_concurrency.processutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config 18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.727 2 INFO nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Deleting local config drive /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config because it was imported into RBD.
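The config-drive sequence is two subprocess calls, both logged verbatim above: build an ISO9660 image labelled config-2, then import it into the vms RBD pool so the guest's SATA cdrom can attach it over the network. Condensed into one sketch; the /tmp metadata directory is ephemeral, so this is illustrative rather than re-runnable.

    import subprocess

    iso = ("/var/lib/nova/instances/"
           "18cfecd8-3017-4bde-906c-6b7784a3d544/disk.config")

    # 1. Build the config drive; the guest finds it by the config-2 label.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpt2j0lu8a",
    ])

    # 2. Import it into the vms pool, after which nova deletes the
    #    local copy, as the INFO line above reports.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso,
        "18cfecd8-3017-4bde-906c-6b7784a3d544_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ])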
Oct 10 10:18:55 compute-0 kernel: tapec07d396-e4: entered promiscuous mode
Oct 10 10:18:55 compute-0 NetworkManager[44849]: <info>  [1760091535.7761] manager: (tapec07d396-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 10 10:18:55 compute-0 ovn_controller[153080]: 2025-10-10T10:18:55Z|00056|binding|INFO|Claiming lport ec07d396-e4e9-4e94-a3ef-9957f5b321d0 for this chassis.
Oct 10 10:18:55 compute-0 ovn_controller[153080]: 2025-10-10T10:18:55Z|00057|binding|INFO|ec07d396-e4e9-4e94-a3ef-9957f5b321d0: Claiming fa:16:3e:9a:c3:f8 10.100.0.5
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.796 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:f8 10.100.0.5'], port_security=['fa:16:3e:9a:c3:f8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18cfecd8-3017-4bde-906c-6b7784a3d544', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5705612c-2460-43a1-a07d-7e0b37362a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cce0a410-e738-448e-8e6d-ae090e93401f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99c8fc03-18b7-4d1f-bd65-563ec3f16e90, chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=ec07d396-e4e9-4e94-a3ef-9957f5b321d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.797 162925 INFO neutron.agent.ovn.metadata.agent [-] Port ec07d396-e4e9-4e94-a3ef-9957f5b321d0 in datapath 5705612c-2460-43a1-a07d-7e0b37362a21 bound to our chassis
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.798 162925 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5705612c-2460-43a1-a07d-7e0b37362a21
Oct 10 10:18:55 compute-0 systemd-udevd[279575]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.812 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ad36a9-0b1a-4bd3-b74d-f117632ad845]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.813 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5705612c-21 in ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
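To serve metadata for this network the agent builds a per-datapath namespace wired to br-int with a veth pair: tap5705612c-20 stays in the root namespace and tap5705612c-21 moves inside ovnmeta-<network-uuid>. An iproute2-level sketch of the same wiring; the agent actually performs it through pyroute2 under oslo.privsep, not the CLI.

    import subprocess

    NS = "ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21"

    def ip(*args):
        subprocess.check_call(["ip", *args])

    ip("netns", "add", NS)
    # One end stays in the root namespace (added to br-int in the
    # transaction below); the peer moves into the metadata namespace.
    ip("link", "add", "tap5705612c-20", "type", "veth",
       "peer", "name", "tap5705612c-21")
    ip("link", "set", "tap5705612c-21", "netns", NS)
    ip("-n", NS, "link", "set", "tap5705612c-21", "up")
    ip("link", "set", "tap5705612c-20", "up")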
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.816 269344 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5705612c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.817 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[dae7ac2a-c0f0-4f98-a509-e204ce5ade17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 NetworkManager[44849]: <info>  [1760091535.8195] device (tapec07d396-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:18:55 compute-0 NetworkManager[44849]: <info>  [1760091535.8207] device (tapec07d396-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.818 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3368b4-1ab5-46d6-9c7c-bf97ab3fc7d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 systemd-machined[215425]: New machine qemu-4-instance-0000000a.
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.839 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[64fed0db-af94-4f48-b995-b789534c3bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:55 compute-0 ovn_controller[153080]: 2025-10-10T10:18:55Z|00058|binding|INFO|Setting lport ec07d396-e4e9-4e94-a3ef-9957f5b321d0 ovn-installed in OVS
Oct 10 10:18:55 compute-0 ovn_controller[153080]: 2025-10-10T10:18:55Z|00059|binding|INFO|Setting lport ec07d396-e4e9-4e94-a3ef-9957f5b321d0 up in Southbound
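Once ovn-controller reports the lport up, the binding can be confirmed from the OVN Southbound database. A sketch of that check, assuming ovn-sbctl on this host can reach the SB DB:

    import subprocess

    # The row should now show up=true and a chassis reference for
    # compute-0, matching the two binding log lines above.
    print(subprocess.check_output([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=ec07d396-e4e9-4e94-a3ef-9957f5b321d0",
    ], text=True))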
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.864 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[6b54a979-ab26-4e9c-add9-9ffb0bd8936e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 nova_compute[261329]: 2025-10-10 10:18:55.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.896 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[03f53090-1795-4c27-8a6e-d34eeb44120c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.900 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0a2e90-dc0d-4c5f-a371-02569cf11e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 NetworkManager[44849]: <info>  [1760091535.9013] manager: (tap5705612c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.936 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[777fb124-1d4c-4ffe-b262-1ba9e1e4c71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.939 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dc1404-b8dd-4c63-a670-f90858c31f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 NetworkManager[44849]: <info>  [1760091535.9631] device (tap5705612c-20): carrier: link connected
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.970 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[d70edd41-8a46-4b56-b41d-5a0750e98859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:55 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:55.990 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[e4375169-3d3a-419c-a148-a2c8697c1665]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5705612c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:31:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443953, 'reachable_time': 35796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279609, 'error': None, 'target': 'ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.007 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[d42d6141-7088-4930-86a0-798849997c16]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:3197'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443953, 'tstamp': 443953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279610, 'error': None, 'target': 'ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.027 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d8c38f-9d10-4332-a5c4-eff464066bd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5705612c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:31:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443953, 'reachable_time': 35796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279611, 'error': None, 'target': 'ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.072 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[50d41c9f-5b05-4441-b6d6-51cca8ce0e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.132 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[0e05d2b6-a316-4864-87d9-bc63bb005bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.134 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5705612c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.135 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.135 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5705612c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:56 compute-0 NetworkManager[44849]: <info>  [1760091536.1384] manager: (tap5705612c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 10 10:18:56 compute-0 kernel: tap5705612c-20: entered promiscuous mode
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.142 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5705612c-20, col_values=(('external_ids', {'iface-id': '1291d48e-69ca-44bf-96cc-5277cf30eb8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:56 compute-0 ovn_controller[153080]: 2025-10-10T10:18:56Z|00060|binding|INFO|Releasing lport 1291d48e-69ca-44bf-96cc-5277cf30eb8f from this chassis (sb_readonly=0)
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.147 162925 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5705612c-2460-43a1-a07d-7e0b37362a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5705612c-2460-43a1-a07d-7e0b37362a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.149 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e840e3-48f3-41bd-9937-ff3ab9ddf868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.151 162925 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: global
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     log         /dev/log local0 debug
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     log-tag     haproxy-metadata-proxy-5705612c-2460-43a1-a07d-7e0b37362a21
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     user        root
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     group       root
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     maxconn     1024
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     pidfile     /var/lib/neutron/external/pids/5705612c-2460-43a1-a07d-7e0b37362a21.pid.haproxy
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     daemon
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: defaults
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     log global
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     mode http
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     option httplog
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     option dontlognull
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     option http-server-close
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     option forwardfor
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     retries                 3
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     timeout http-request    30s
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     timeout connect         30s
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     timeout client          32s
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     timeout server          32s
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     timeout http-keep-alive 30s
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: listen listener
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     bind 169.254.169.254:80
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:     http-request add-header X-OVN-Network-ID 5705612c-2460-43a1-a07d-7e0b37362a21
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:18:56 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:18:56.156 162925 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21', 'env', 'PROCESS_TAG=haproxy-5705612c-2460-43a1-a07d-7e0b37362a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5705612c-2460-43a1-a07d-7e0b37362a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.157 2 DEBUG nova.compute.manager [req-27ece47a-4632-4c7f-a736-7afb7839f2ba req-bb2c0a34-f698-480d-9513-b9b37452c2bc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.157 2 DEBUG oslo_concurrency.lockutils [req-27ece47a-4632-4c7f-a736-7afb7839f2ba req-bb2c0a34-f698-480d-9513-b9b37452c2bc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.157 2 DEBUG oslo_concurrency.lockutils [req-27ece47a-4632-4c7f-a736-7afb7839f2ba req-bb2c0a34-f698-480d-9513-b9b37452c2bc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.158 2 DEBUG oslo_concurrency.lockutils [req-27ece47a-4632-4c7f-a736-7afb7839f2ba req-bb2c0a34-f698-480d-9513-b9b37452c2bc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.158 2 DEBUG nova.compute.manager [req-27ece47a-4632-4c7f-a736-7afb7839f2ba req-bb2c0a34-f698-480d-9513-b9b37452c2bc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Processing event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:56 compute-0 ceph-mon[73551]: pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/475021902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:56 compute-0 podman[279686]: 2025-10-10 10:18:56.539502279 +0000 UTC m=+0.057896108 container create 0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:18:56 compute-0 systemd[1]: Started libpod-conmon-0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819.scope.
Oct 10 10:18:56 compute-0 podman[279686]: 2025-10-10 10:18:56.505644299 +0000 UTC m=+0.024038208 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:18:56 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:18:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dbf624d22677d989226c08eed57ba78aea18c11f8eadded0adb503594c270e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:18:56 compute-0 podman[279686]: 2025-10-10 10:18:56.621976962 +0000 UTC m=+0.140370811 container init 0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:18:56 compute-0 podman[279686]: 2025-10-10 10:18:56.627393795 +0000 UTC m=+0.145787624 container start 0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 10:18:56 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [NOTICE]   (279706) : New worker (279708) forked
Oct 10 10:18:56 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [NOTICE]   (279706) : Loading success.
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.777 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.779 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091536.7783074, 18cfecd8-3017-4bde-906c-6b7784a3d544 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.779 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] VM Started (Lifecycle Event)
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.781 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.785 2 INFO nova.virt.libvirt.driver [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Instance spawned successfully.
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.785 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.802 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.807 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.810 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.810 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.811 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.811 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.811 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.812 2 DEBUG nova.virt.libvirt.driver [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.848 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.848 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091536.778524, 18cfecd8-3017-4bde-906c-6b7784a3d544 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.848 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] VM Paused (Lifecycle Event)
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.899 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.902 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091536.781279, 18cfecd8-3017-4bde-906c-6b7784a3d544 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.903 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] VM Resumed (Lifecycle Event)
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.925 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.927 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.941 2 INFO nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Took 10.80 seconds to spawn the instance on the hypervisor.
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.941 2 DEBUG nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:18:56 compute-0 nova_compute[261329]: 2025-10-10 10:18:56.953 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:18:57 compute-0 nova_compute[261329]: 2025-10-10 10:18:57.007 2 INFO nova.compute.manager [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Took 11.93 seconds to build instance.
Oct 10 10:18:57 compute-0 nova_compute[261329]: 2025-10-10 10:18:57.021 2 DEBUG oslo_concurrency.lockutils [None req-3d41df17-e14a-4ba8-a4ce-232a7fef22fc 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:57.187Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:18:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:57.187Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:18:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:18:57.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:18:57 compute-0 nova_compute[261329]: 2025-10-10 10:18:57.233 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:57 compute-0 nova_compute[261329]: 2025-10-10 10:18:57.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:57 compute-0 nova_compute[261329]: 2025-10-10 10:18:57.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3962479693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:57.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:57] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:18:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:18:57] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:18:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:57.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:57 compute-0 nova_compute[261329]: 2025-10-10 10:18:57.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.262 2 DEBUG nova.compute.manager [req-de857dd3-0f20-4bb6-bfe3-3549c5c0142a req-654e2987-23eb-4caa-8276-1660b3cbb3e2 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.262 2 DEBUG oslo_concurrency.lockutils [req-de857dd3-0f20-4bb6-bfe3-3549c5c0142a req-654e2987-23eb-4caa-8276-1660b3cbb3e2 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.263 2 DEBUG oslo_concurrency.lockutils [req-de857dd3-0f20-4bb6-bfe3-3549c5c0142a req-654e2987-23eb-4caa-8276-1660b3cbb3e2 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.263 2 DEBUG oslo_concurrency.lockutils [req-de857dd3-0f20-4bb6-bfe3-3549c5c0142a req-654e2987-23eb-4caa-8276-1660b3cbb3e2 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.263 2 DEBUG nova.compute.manager [req-de857dd3-0f20-4bb6-bfe3-3549c5c0142a req-654e2987-23eb-4caa-8276-1660b3cbb3e2 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] No waiting events found dispatching network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:18:58 compute-0 nova_compute[261329]: 2025-10-10 10:18:58.264 2 WARNING nova.compute.manager [req-de857dd3-0f20-4bb6-bfe3-3549c5c0142a req-654e2987-23eb-4caa-8276-1660b3cbb3e2 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received unexpected event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 for instance with vm_state active and task_state None.
Oct 10 10:18:58 compute-0 ceph-mon[73551]: pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.263 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.263 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.263 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.263 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.264 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:18:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:59.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:18:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:18:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:59.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:18:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330076369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.750 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.824 2 DEBUG nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:18:59 compute-0 nova_compute[261329]: 2025-10-10 10:18:59.824 2 DEBUG nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:18:59 compute-0 sudo[279743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:18:59 compute-0 sudo[279743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:59 compute-0 sudo[279743]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.018 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.021 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4426MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.022 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.022 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:00 compute-0 ceph-mon[73551]: pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:19:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/330076369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.432 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Instance 18cfecd8-3017-4bde-906c-6b7784a3d544 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.432 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.433 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.462 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:19:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:19:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712929211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.941 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.947 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:19:00 compute-0 nova_compute[261329]: 2025-10-10 10:19:00.965 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:19:01 compute-0 nova_compute[261329]: 2025-10-10 10:19:01.010 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:19:01 compute-0 nova_compute[261329]: 2025-10-10 10:19:01.011 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 10 10:19:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:19:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:01.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1712929211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:01.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:02 compute-0 podman[279793]: 2025-10-10 10:19:02.244131788 +0000 UTC m=+0.075495581 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:19:02 compute-0 podman[279794]: 2025-10-10 10:19:02.257152494 +0000 UTC m=+0.086237953 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:19:02 compute-0 podman[279795]: 2025-10-10 10:19:02.274106115 +0000 UTC m=+0.105122836 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:19:02 compute-0 ceph-mon[73551]: pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 10 10:19:02 compute-0 nova_compute[261329]: 2025-10-10 10:19:02.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 10 10:19:03 compute-0 NetworkManager[44849]: <info>  [1760091543.1286] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 10 10:19:03 compute-0 ovn_controller[153080]: 2025-10-10T10:19:03Z|00061|binding|INFO|Releasing lport 1291d48e-69ca-44bf-96cc-5277cf30eb8f from this chassis (sb_readonly=0)
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:03 compute-0 NetworkManager[44849]: <info>  [1760091543.1300] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 10 10:19:03 compute-0 sudo[279860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:03 compute-0 sudo[279860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:03 compute-0 sudo[279860]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:03 compute-0 ovn_controller[153080]: 2025-10-10T10:19:03Z|00062|binding|INFO|Releasing lport 1291d48e-69ca-44bf-96cc-5277cf30eb8f from this chassis (sb_readonly=0)
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:03 compute-0 sudo[279885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:19:03 compute-0 sudo[279885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:19:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:03.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:19:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3651707172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:03 compute-0 ceph-mon[73551]: pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 10 10:19:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4015442307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:03.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.505 2 DEBUG nova.compute.manager [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-changed-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.507 2 DEBUG nova.compute.manager [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Refreshing instance network info cache due to event network-changed-ec07d396-e4e9-4e94-a3ef-9957f5b321d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.507 2 DEBUG oslo_concurrency.lockutils [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.508 2 DEBUG oslo_concurrency.lockutils [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:19:03 compute-0 nova_compute[261329]: 2025-10-10 10:19:03.508 2 DEBUG nova.network.neutron [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Refreshing network info cache for port ec07d396-e4e9-4e94-a3ef-9957f5b321d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:19:03 compute-0 sudo[279885]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:03 compute-0 sudo[279941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:03 compute-0 sudo[279941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:03 compute-0 sudo[279941]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:03 compute-0 sudo[279966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 10:19:03 compute-0 sudo[279966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:04 compute-0 sudo[279966]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:19:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:19:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 10 10:19:04 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:04 compute-0 nova_compute[261329]: 2025-10-10 10:19:04.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:19:05 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:19:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:19:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:05.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:19:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:19:05 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:05.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:05 compute-0 nova_compute[261329]: 2025-10-10 10:19:05.485 2 DEBUG nova.network.neutron [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updated VIF entry in instance network info cache for port ec07d396-e4e9-4e94-a3ef-9957f5b321d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:19:05 compute-0 nova_compute[261329]: 2025-10-10 10:19:05.485 2 DEBUG nova.network.neutron [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updating instance_info_cache with network_info: [{"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:19:05 compute-0 nova_compute[261329]: 2025-10-10 10:19:05.510 2 DEBUG oslo_concurrency.lockutils [req-d5eaf434-b2eb-49dc-9d6c-fbbdf4c2aea0 req-882968bd-26bd-47bd-977c-0ce37f4e5f88 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:19:06 compute-0 ceph-mon[73551]: pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:19:06 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:19:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:19:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:19:06 compute-0 sudo[280015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:06 compute-0 sudo[280015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:06 compute-0 sudo[280015]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:06 compute-0 sudo[280040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:19:06 compute-0 sudo[280040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:19:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:07.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.313292155 +0000 UTC m=+0.048994015 container create 13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bell, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:19:07 compute-0 systemd[1]: Started libpod-conmon-13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1.scope.
Oct 10 10:19:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.292871733 +0000 UTC m=+0.028573613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:19:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:07] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 10 10:19:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:07] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Oct 10 10:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.41624183 +0000 UTC m=+0.151943770 container init 13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:19:07 compute-0 ceph-mon[73551]: pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.431466407 +0000 UTC m=+0.167168297 container start 13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.436025552 +0000 UTC m=+0.171727432 container attach 13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bell, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 10:19:07 compute-0 hopeful_bell[280124]: 167 167
Oct 10 10:19:07 compute-0 systemd[1]: libpod-13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1.scope: Deactivated successfully.
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.44066743 +0000 UTC m=+0.176369280 container died 13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b93f95eb4b97ab174067ad08d14dcd5b973c0d70f7ed610ea563ad3c54a58a14-merged.mount: Deactivated successfully.
Oct 10 10:19:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:07.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:07 compute-0 podman[280109]: 2025-10-10 10:19:07.48578759 +0000 UTC m=+0.221489440 container remove 13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bell, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:19:07 compute-0 systemd[1]: libpod-conmon-13a8a39094d86059c754e51e574ff41696daabcf731dae0bc51e496acc728ad1.scope: Deactivated successfully.
Oct 10 10:19:07 compute-0 podman[280147]: 2025-10-10 10:19:07.676259089 +0000 UTC m=+0.051330620 container create 7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:19:07 compute-0 systemd[1]: Started libpod-conmon-7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f.scope.
Oct 10 10:19:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:19:07 compute-0 podman[280147]: 2025-10-10 10:19:07.656141927 +0000 UTC m=+0.031213478 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72b4331cc0beec407cb8fae59d30722218a0eda4f26e2628524c6e5b1d8c580/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72b4331cc0beec407cb8fae59d30722218a0eda4f26e2628524c6e5b1d8c580/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72b4331cc0beec407cb8fae59d30722218a0eda4f26e2628524c6e5b1d8c580/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72b4331cc0beec407cb8fae59d30722218a0eda4f26e2628524c6e5b1d8c580/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72b4331cc0beec407cb8fae59d30722218a0eda4f26e2628524c6e5b1d8c580/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:07 compute-0 podman[280147]: 2025-10-10 10:19:07.764659491 +0000 UTC m=+0.139731022 container init 7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:19:07 compute-0 podman[280147]: 2025-10-10 10:19:07.771839719 +0000 UTC m=+0.146911240 container start 7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 10:19:07 compute-0 podman[280147]: 2025-10-10 10:19:07.775561839 +0000 UTC m=+0.150633360 container attach 7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:19:07 compute-0 nova_compute[261329]: 2025-10-10 10:19:07.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:08 compute-0 sleepy_leavitt[280163]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:19:08 compute-0 sleepy_leavitt[280163]: --> All data devices are unavailable
Oct 10 10:19:08 compute-0 systemd[1]: libpod-7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f.scope: Deactivated successfully.
Oct 10 10:19:08 compute-0 podman[280147]: 2025-10-10 10:19:08.141195568 +0000 UTC m=+0.516267089 container died 7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f72b4331cc0beec407cb8fae59d30722218a0eda4f26e2628524c6e5b1d8c580-merged.mount: Deactivated successfully.
Oct 10 10:19:08 compute-0 podman[280147]: 2025-10-10 10:19:08.186724121 +0000 UTC m=+0.561795642 container remove 7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:19:08 compute-0 systemd[1]: libpod-conmon-7094240d31e37a097ca643098a716191990b99c2ee5e77c961007c810122df1f.scope: Deactivated successfully.
Oct 10 10:19:08 compute-0 sudo[280040]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:08 compute-0 sudo[280190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:08 compute-0 sudo[280190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:08 compute-0 sudo[280190]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:08 compute-0 sudo[280215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:19:08 compute-0 sudo[280215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.815994815 +0000 UTC m=+0.042415145 container create 1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:08 compute-0 systemd[1]: Started libpod-conmon-1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50.scope.
Oct 10 10:19:08 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.885983308 +0000 UTC m=+0.112403688 container init 1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.798057152 +0000 UTC m=+0.024477512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.899653475 +0000 UTC m=+0.126073805 container start 1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.902994621 +0000 UTC m=+0.129415001 container attach 1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bartik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 10:19:08 compute-0 nifty_bartik[280298]: 167 167
Oct 10 10:19:08 compute-0 systemd[1]: libpod-1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50.scope: Deactivated successfully.
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.905438529 +0000 UTC m=+0.131858869 container died 1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-82044da9820983b1b9849e981fe17539f2c7657903759c677596be2f39bccd6c-merged.mount: Deactivated successfully.
Oct 10 10:19:08 compute-0 podman[280280]: 2025-10-10 10:19:08.949270198 +0000 UTC m=+0.175690548 container remove 1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:08 compute-0 systemd[1]: libpod-conmon-1eeaba0d26d47b44a08c4d7d89bd1badeeacfbb917ca88ae3ee3c1145e8baa50.scope: Deactivated successfully.
Oct 10 10:19:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 10 10:19:09 compute-0 podman[280324]: 2025-10-10 10:19:09.186797159 +0000 UTC m=+0.047421025 container create a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:09 compute-0 systemd[1]: Started libpod-conmon-a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a.scope.
Oct 10 10:19:09 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6826c8b7c7966b0a5426c2f5e2a111d428fe55caac98ed14dc8adc015a830a9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6826c8b7c7966b0a5426c2f5e2a111d428fe55caac98ed14dc8adc015a830a9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6826c8b7c7966b0a5426c2f5e2a111d428fe55caac98ed14dc8adc015a830a9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6826c8b7c7966b0a5426c2f5e2a111d428fe55caac98ed14dc8adc015a830a9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:09 compute-0 podman[280324]: 2025-10-10 10:19:09.168539716 +0000 UTC m=+0.029163602 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:19:09 compute-0 podman[280324]: 2025-10-10 10:19:09.276851533 +0000 UTC m=+0.137475399 container init a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:09 compute-0 podman[280324]: 2025-10-10 10:19:09.282220034 +0000 UTC m=+0.142843920 container start a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_galileo, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:19:09 compute-0 podman[280324]: 2025-10-10 10:19:09.285901292 +0000 UTC m=+0.146525188 container attach a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:19:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:19:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:09.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:19:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:09.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:09 compute-0 elated_galileo[280338]: {
Oct 10 10:19:09 compute-0 elated_galileo[280338]:     "0": [
Oct 10 10:19:09 compute-0 elated_galileo[280338]:         {
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "devices": [
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "/dev/loop3"
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             ],
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "lv_name": "ceph_lv0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "lv_size": "21470642176",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "name": "ceph_lv0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "tags": {
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.cluster_name": "ceph",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.crush_device_class": "",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.encrypted": "0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.osd_id": "0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.type": "block",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.vdo": "0",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:                 "ceph.with_tpm": "0"
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             },
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "type": "block",
Oct 10 10:19:09 compute-0 elated_galileo[280338]:             "vg_name": "ceph_vg0"
Oct 10 10:19:09 compute-0 elated_galileo[280338]:         }
Oct 10 10:19:09 compute-0 elated_galileo[280338]:     ]
Oct 10 10:19:09 compute-0 elated_galileo[280338]: }
Oct 10 10:19:09 compute-0 systemd[1]: libpod-a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a.scope: Deactivated successfully.
Oct 10 10:19:09 compute-0 nova_compute[261329]: 2025-10-10 10:19:09.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:09 compute-0 podman[280347]: 2025-10-10 10:19:09.644141965 +0000 UTC m=+0.026539967 container died a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_galileo, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6826c8b7c7966b0a5426c2f5e2a111d428fe55caac98ed14dc8adc015a830a9d-merged.mount: Deactivated successfully.
Oct 10 10:19:09 compute-0 podman[280347]: 2025-10-10 10:19:09.691258989 +0000 UTC m=+0.073656901 container remove a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_galileo, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:19:09 compute-0 systemd[1]: libpod-conmon-a53b353f9cd1c7c0eeb04adbaf7cef21c7fe35a84270890ba07faa26f0ddc55a.scope: Deactivated successfully.
Oct 10 10:19:09 compute-0 sudo[280215]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:09 compute-0 sudo[280362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:09 compute-0 sudo[280362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:09 compute-0 sudo[280362]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:09 compute-0 sudo[280387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:19:09 compute-0 sudo[280387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:09 compute-0 ovn_controller[153080]: 2025-10-10T10:19:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:c3:f8 10.100.0.5
Oct 10 10:19:09 compute-0 ovn_controller[153080]: 2025-10-10T10:19:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:c3:f8 10.100.0.5
Oct 10 10:19:10 compute-0 ceph-mon[73551]: pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.335737718 +0000 UTC m=+0.059818349 container create 4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:19:10 compute-0 systemd[1]: Started libpod-conmon-4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b.scope.
Oct 10 10:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.309384298 +0000 UTC m=+0.033464989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.413981046 +0000 UTC m=+0.138061657 container init 4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_clarke, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.421562487 +0000 UTC m=+0.145643078 container start 4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_clarke, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:19:10 compute-0 optimistic_clarke[280471]: 167 167
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.42508255 +0000 UTC m=+0.149163171 container attach 4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_clarke, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:19:10 compute-0 systemd[1]: libpod-4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b.scope: Deactivated successfully.
Oct 10 10:19:10 compute-0 conmon[280471]: conmon 4f29ae2f54d11d019162 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b.scope/container/memory.events
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.427637171 +0000 UTC m=+0.151717772 container died 4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a9fd8618a1b5ce5805fcb92d0ceffae49a0ef900250a05e9b080042e15769e8-merged.mount: Deactivated successfully.
Oct 10 10:19:10 compute-0 podman[280454]: 2025-10-10 10:19:10.467497994 +0000 UTC m=+0.191578585 container remove 4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 10 10:19:10 compute-0 systemd[1]: libpod-conmon-4f29ae2f54d11d0191628c0ca8ef9b1ee28bd80a261bf4f24d0011e11775284b.scope: Deactivated successfully.
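Note: the create/init/start/attach/died/remove burst above (container optimistic_clarke, alive for roughly 100 ms) is the journald signature of a one-shot cephadm helper container; its only output was "167 167", the ceph uid/gid. Which command it ran is not recorded here; as an assumed illustration, stat'ing a ceph-owned path in the same image reproduces that output, with `podman run --rm` collapsing the whole lifecycle into one call:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot helper container; --rm yields the same create/start/attach/remove
    # sequence journald records above. The stat target is an assumption.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout.strip())  # expected: "167 167" (ceph uid/gid)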
Oct 10 10:19:10 compute-0 podman[280495]: 2025-10-10 10:19:10.678938063 +0000 UTC m=+0.061793374 container create 053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:19:10 compute-0 systemd[1]: Started libpod-conmon-053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440.scope.
Oct 10 10:19:10 compute-0 podman[280495]: 2025-10-10 10:19:10.644429881 +0000 UTC m=+0.027285242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67176267ff119f7ab249c5aba77daeeb60bd30616366ff6f15b80fd7ec78f31d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67176267ff119f7ab249c5aba77daeeb60bd30616366ff6f15b80fd7ec78f31d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67176267ff119f7ab249c5aba77daeeb60bd30616366ff6f15b80fd7ec78f31d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67176267ff119f7ab249c5aba77daeeb60bd30616366ff6f15b80fd7ec78f31d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
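Note: the four kernel lines above are XFS warning that these bind-mounted paths sit on a filesystem without bigtime support, so inode timestamps saturate at 0x7fffffff seconds past the epoch. Decoding that limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t ceiling the kernel message cites.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00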
Oct 10 10:19:10 compute-0 podman[280495]: 2025-10-10 10:19:10.782314962 +0000 UTC m=+0.165170253 container init 053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:19:10 compute-0 podman[280495]: 2025-10-10 10:19:10.804387426 +0000 UTC m=+0.187242697 container start 053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hellman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:19:10 compute-0 podman[280495]: 2025-10-10 10:19:10.808465836 +0000 UTC m=+0.191321147 container attach 053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:19:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 10 10:19:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:11.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
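Note: the paired beast access-log entries recurring every two seconds (anonymous `HEAD / HTTP/1.0` from 192.168.122.100 and 192.168.122.102, always 200, near-zero latency) have the shape of load-balancer health checks rather than user traffic. A probe of the same shape, with the port as an assumption since the access log does not record it:

    import http.client

    # Anonymous HEAD / probe mirroring the beast entries above.
    # Port 8080 is an assumption; radosgw's listen port is not in these lines.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200
    conn.close()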
Oct 10 10:19:11 compute-0 lvm[280587]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:19:11 compute-0 lvm[280587]: VG ceph_vg0 finished
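Note: the two lvm lines come from event-driven autoactivation: when the last PV backing ceph_vg0 (here /dev/loop3) appears, the VG is declared complete and activation proceeds. The same completeness test can be asked of LVM directly (assuming the lvm2 CLI is present):

    import json
    import subprocess

    # A VG is "complete" when none of its PVs are missing.
    out = subprocess.run(
        ["vgs", "--reportformat", "json",
         "-o", "vg_name,pv_count,vg_missing_pv_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        state = "complete" if vg["vg_missing_pv_count"] == "0" else "incomplete"
        print(vg["vg_name"], state)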
Oct 10 10:19:11 compute-0 peaceful_hellman[280511]: {}
Oct 10 10:19:11 compute-0 systemd[1]: libpod-053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440.scope: Deactivated successfully.
Oct 10 10:19:11 compute-0 systemd[1]: libpod-053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440.scope: Consumed 1.375s CPU time.
Oct 10 10:19:11 compute-0 podman[280495]: 2025-10-10 10:19:11.641428991 +0000 UTC m=+1.024284332 container died 053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hellman, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 10:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-67176267ff119f7ab249c5aba77daeeb60bd30616366ff6f15b80fd7ec78f31d-merged.mount: Deactivated successfully.
Oct 10 10:19:11 compute-0 podman[280495]: 2025-10-10 10:19:11.705429374 +0000 UTC m=+1.088284655 container remove 053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 10:19:11 compute-0 systemd[1]: libpod-conmon-053a57419012a12a863081c6d8077acaeadc94259db290d157f98ee5a255d440.scope: Deactivated successfully.
Oct 10 10:19:11 compute-0 sudo[280387]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:19:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:19:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:11 compute-0 sudo[280605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:19:11 compute-0 sudo[280605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:11 compute-0 sudo[280605]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:12 compute-0 ceph-mon[73551]: pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 10 10:19:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:12 compute-0 nova_compute[261329]: 2025-10-10 10:19:12.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 10 10:19:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:14 compute-0 ceph-mon[73551]: pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 10 10:19:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:14 compute-0 nova_compute[261329]: 2025-10-10 10:19:14.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:15.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:15 compute-0 nova_compute[261329]: 2025-10-10 10:19:15.915 2 INFO nova.compute.manager [None req-40d5a40d-afea-462d-909c-683cf012b9b1 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Get console output
Oct 10 10:19:15 compute-0 nova_compute[261329]: 2025-10-10 10:19:15.921 2 INFO oslo.privsep.daemon [None req-40d5a40d-afea-462d-909c-683cf012b9b1 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp49bj6zs2/privsep.sock']
Oct 10 10:19:16 compute-0 ceph-mon[73551]: pgmap v999: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:19:16
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.nfs', 'volumes', 'default.rgw.control', '.mgr', 'images', 'vms', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups']
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
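Note: the balancer block above is one scheduler pass: upmap mode, at most 5% of PGs misplaced at a time, all twelve pools evaluated, and 0 of a possible 10 upmap changes prepared because placement is already even. The module's state can be read back from the CLI; a sketch, with field names as in recent Ceph releases:

    import json
    import subprocess

    # Query the mgr balancer module for mode, activity and last result.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status["mode"], status["active"], status.get("optimize_result"))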
Oct 10 10:19:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:19:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:19:16 compute-0 nova_compute[261329]: 2025-10-10 10:19:16.653 2 INFO oslo.privsep.daemon [None req-40d5a40d-afea-462d-909c-683cf012b9b1 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Spawned new privsep daemon via rootwrap
Oct 10 10:19:16 compute-0 nova_compute[261329]: 2025-10-10 10:19:16.495 2054 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:19:16 compute-0 nova_compute[261329]: 2025-10-10 10:19:16.499 2054 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:19:16 compute-0 nova_compute[261329]: 2025-10-10 10:19:16.502 2054 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 10 10:19:16 compute-0 nova_compute[261329]: 2025-10-10 10:19:16.502 2054 INFO oslo.privsep.daemon [-] privsep daemon running as pid 2054
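Note: the privsep lines above show nova's standard split for privileged work: the unprivileged service spawns a helper via sudo + rootwrap, the helper drops to the listed capability set (CAP_SYS_ADMIN and friends, no inheritable caps), and calls are proxied to it over the named unix socket. A minimal sketch of the same pattern using oslo.privsep's documented API; this is not nova's actual code:

    from oslo_privsep import capabilities, priv_context

    # Like nova.privsep.sys_admin_pctxt above: a context whose forked daemon
    # keeps only a bounded capability set.
    ctx = priv_context.PrivContext(
        "demo",
        cfg_section="demo_privsep",
        pypath=__name__ + ".ctx",
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def read_protected(path):
        # Executes inside the privileged daemon, not the caller's process.
        with open(path, "rb") as f:
            return f.read(64)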
Oct 10 10:19:16 compute-0 nova_compute[261329]: 2025-10-10 10:19:16.791 2054 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007567930137722212 of space, bias 1.0, pg target 0.22703790413166636 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
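Note: every pg_autoscaler pair above applies one formula: raw PG target = (pool's share of raw capacity) x bias x PG budget, where the budget is mon_target_pg_per_osd x OSD count, and the raw value is then quantized to a power of two subject to per-pool floors (hence "quantized to 32 (current 32)"). The repeated 64411926528 is the 60 GiB of raw capacity. Assuming 3 OSDs and the default mon_target_pg_per_osd = 100, the budget is 300, which reproduces the logged targets exactly:

    # Reproducing the pg_autoscaler targets logged above.
    # Assumption: 3 OSDs x mon_target_pg_per_osd=100 -> budget of 300.
    PG_BUDGET = 300

    pools = {  # name: (usage ratio, bias), both taken from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0007567930137722212, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr               0.0021557249951162337
    # vms                0.22703790413166636
    # cephfs.cephfs.meta 0.0006104707950771635   (all match the log)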
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:19:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:19:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:17.190Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:19:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:17.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
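Note: both webhook targets of the ceph-dashboard receiver are failing above, compute-1 with a TCP i/o timeout and compute-2 with a context deadline, so the alert is dropped after retries. A rough reachability probe for the same endpoint (the empty alert body is only a placeholder, not Alertmanager's real payload):

    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    req = urllib.request.Request(
        URL, data=b"[]", headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, but HTTP error:", exc.code)  # still proves connectivity
    except OSError as exc:
        print("unreachable:", exc)  # matches the i/o timeout in the log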
Oct 10 10:19:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:17 compute-0 podman[280642]: 2025-10-10 10:19:17.222724232 +0000 UTC m=+0.067441403 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:19:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:17.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:17] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:19:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:17] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:19:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:17.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:17 compute-0 ovn_controller[153080]: 2025-10-10T10:19:17Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:c3:f8 10.100.0.5
Oct 10 10:19:17 compute-0 nova_compute[261329]: 2025-10-10 10:19:17.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:18 compute-0 ceph-mon[73551]: pgmap v1000: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:18 compute-0 ovn_controller[153080]: 2025-10-10T10:19:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:c3:f8 10.100.0.5
Oct 10 10:19:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:19:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:19.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:19.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:19 compute-0 nova_compute[261329]: 2025-10-10 10:19:19.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:20 compute-0 sudo[280664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:19:20 compute-0 sudo[280664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:20 compute-0 sudo[280664]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:20 compute-0 ceph-mon[73551]: pgmap v1001: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:19:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:21.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:21.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:22 compute-0 ceph-mon[73551]: pgmap v1002: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.460 2 DEBUG nova.compute.manager [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-changed-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.460 2 DEBUG nova.compute.manager [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Refreshing instance network info cache due to event network-changed-ec07d396-e4e9-4e94-a3ef-9957f5b321d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.460 2 DEBUG oslo_concurrency.lockutils [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.460 2 DEBUG oslo_concurrency.lockutils [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.460 2 DEBUG nova.network.neutron [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Refreshing network info cache for port ec07d396-e4e9-4e94-a3ef-9957f5b321d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.532 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.532 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.533 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.533 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.533 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.532 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.534 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.534 2 INFO nova.compute.manager [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Terminating instance
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.535 2 DEBUG nova.compute.manager [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 kernel: tapec07d396-e4 (unregistering): left promiscuous mode
Oct 10 10:19:22 compute-0 NetworkManager[44849]: <info>  [1760091562.5936] device (tapec07d396-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 ovn_controller[153080]: 2025-10-10T10:19:22Z|00063|binding|INFO|Releasing lport ec07d396-e4e9-4e94-a3ef-9957f5b321d0 from this chassis (sb_readonly=0)
Oct 10 10:19:22 compute-0 ovn_controller[153080]: 2025-10-10T10:19:22Z|00064|binding|INFO|Setting lport ec07d396-e4e9-4e94-a3ef-9957f5b321d0 down in Southbound
Oct 10 10:19:22 compute-0 ovn_controller[153080]: 2025-10-10T10:19:22Z|00065|binding|INFO|Removing iface tapec07d396-e4 ovn-installed in OVS
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.621 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:f8 10.100.0.5'], port_security=['fa:16:3e:9a:c3:f8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18cfecd8-3017-4bde-906c-6b7784a3d544', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5705612c-2460-43a1-a07d-7e0b37362a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cce0a410-e738-448e-8e6d-ae090e93401f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99c8fc03-18b7-4d1f-bd65-563ec3f16e90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=ec07d396-e4e9-4e94-a3ef-9957f5b321d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.622 162925 INFO neutron.agent.ovn.metadata.agent [-] Port ec07d396-e4e9-4e94-a3ef-9957f5b321d0 in datapath 5705612c-2460-43a1-a07d-7e0b37362a21 unbound from our chassis
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.623 162925 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5705612c-2460-43a1-a07d-7e0b37362a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.625 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdd1b1d-7322-4353-9344-90786772a706]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.626 162925 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21 namespace which is not needed anymore
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 10 10:19:22 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 13.326s CPU time.
Oct 10 10:19:22 compute-0 systemd-machined[215425]: Machine qemu-4-instance-0000000a terminated.
Oct 10 10:19:22 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [NOTICE]   (279706) : haproxy version is 2.8.14-c23fe91
Oct 10 10:19:22 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [NOTICE]   (279706) : path to executable is /usr/sbin/haproxy
Oct 10 10:19:22 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [WARNING]  (279706) : Exiting Master process...
Oct 10 10:19:22 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [ALERT]    (279706) : Current worker (279708) exited with code 143 (Terminated)
Oct 10 10:19:22 compute-0 neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21[279702]: [WARNING]  (279706) : All workers exited. Exiting... (0)
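Note: "exited with code 143" above is not a crash: service managers report a signal death as 128 + signal number, and 143 = 128 + 15 (SIGTERM), i.e. the metadata agent deliberately stopped its haproxy once the last port in the namespace went away. The arithmetic:

    import signal

    # 128 + SIGTERM = 143, the "exit code" reported for the killed worker.
    print(128 + signal.SIGTERM)       # 143
    print(signal.Signals(143 - 128))  # Signals.SIGTERM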
Oct 10 10:19:22 compute-0 systemd[1]: libpod-0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819.scope: Deactivated successfully.
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 podman[280718]: 2025-10-10 10:19:22.768928735 +0000 UTC m=+0.046270487 container died 0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.792 2 INFO nova.virt.libvirt.driver [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Instance destroyed successfully.
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.793 2 DEBUG nova.objects.instance [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 18cfecd8-3017-4bde-906c-6b7784a3d544 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819-userdata-shm.mount: Deactivated successfully.
Oct 10 10:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dbf624d22677d989226c08eed57ba78aea18c11f8eadded0adb503594c270e6-merged.mount: Deactivated successfully.
Oct 10 10:19:22 compute-0 podman[280718]: 2025-10-10 10:19:22.809725866 +0000 UTC m=+0.087067608 container cleanup 0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.814 2 DEBUG nova.virt.libvirt.vif [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:18:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-376763456',display_name='tempest-TestNetworkBasicOps-server-376763456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-376763456',id=10,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMl05Q2kd4BnjEfpYur+319mg1yfy+wiVVa4alZXTZtYpVK6CwqSnYm5UoWAZUverDsKq1NJbLqzumWnGU1ynqtdWGl+B5lKE95q4mdEJoro52IKXn5aBncuPkGARRZT9g==',key_name='tempest-TestNetworkBasicOps-2037591979',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:18:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-5tcdlcdl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:18:56Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=18cfecd8-3017-4bde-906c-6b7784a3d544,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.814 2 DEBUG nova.network.os_vif_util [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.815 2 DEBUG nova.network.os_vif_util [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.816 2 DEBUG os_vif [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec07d396-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
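The DelPortCommand above is ovsdbapp's OVSDB transaction removing the instance's tap port from br-int. For reference, the same operation can be issued directly with ovsdbapp; a minimal sketch, where the database socket path is an assumption (a common distribution default) and the port and bridge names are taken from the log line:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb-server socket
idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

# equivalent of the logged DelPortCommand(port=tapec07d396-e4,
# bridge=br-int, if_exists=True)
api.del_port('tapec07d396-e4', bridge='br-int', if_exists=True).execute(
    check_error=True)
```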
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.824 2 INFO os_vif [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=ec07d396-e4e9-4e94-a3ef-9957f5b321d0,network=Network(5705612c-2460-43a1-a07d-7e0b37362a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec07d396-e4')
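The unplug sequence above is nova converting its own VIF model into an os-vif VIFOpenVSwitch object and handing it to the os-vif library. A rough standalone sketch of that call path, assuming the os-vif OVS plugin is installed and privileges are available; field values are copied from the log:

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the plug/unplug plugins (ovs, linux_bridge, ...)

net = network.Network(id='5705612c-2460-43a1-a07d-7e0b37362a21',
                      bridge='br-int')
v = vif.VIFOpenVSwitch(id='ec07d396-e4e9-4e94-a3ef-9957f5b321d0',
                       address='fa:16:3e:9a:c3:f8',
                       vif_name='tapec07d396-e4',
                       bridge_name='br-int',
                       network=net)
inst = instance_info.InstanceInfo(
    uuid='18cfecd8-3017-4bde-906c-6b7784a3d544',
    name='tempest-testnetworkbasicops-server-376763456')

os_vif.unplug(v, inst)  # removes tapec07d396-e4 from br-int
```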
Oct 10 10:19:22 compute-0 systemd[1]: libpod-conmon-0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819.scope: Deactivated successfully.
Oct 10 10:19:22 compute-0 podman[280751]: 2025-10-10 10:19:22.886781236 +0000 UTC m=+0.048061785 container remove 0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.894 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0143cd-3680-485f-805f-24c7a7b39b2d]: (4, ('Fri Oct 10 10:19:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21 (0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819)\n0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819\nFri Oct 10 10:19:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21 (0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819)\n0060e8ba27eba63fa98d368ad29e762e1759c22135848003804cc64ebef3b819\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.897 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e6815d-2917-45fb-a726-82eca6281435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
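The privsep: reply[...] lines are responses from an oslo.privsep helper daemon, which runs privileged operations (here, stopping and deleting the haproxy metadata container) on behalf of the unprivileged agent and ships the return value back over the privsep channel. A minimal sketch of how such an entrypoint is declared; the context name and capability set below are illustrative, not the agent's actual definition:

```python
from oslo_privsep import capabilities, priv_context

# illustrative privsep context; real agents define one per package
default = priv_context.PrivContext(
    __name__,
    cfg_section='privsep',
    pw_uid=0, gr_gid=0,
    capabilities=[capabilities.CAP_SYS_ADMIN,
                  capabilities.CAP_NET_ADMIN],
)

@default.entrypoint
def delete_container(name):
    # body runs in the forked, privileged daemon; the caller receives the
    # return value as one of the "privsep: reply[...]" messages seen above
    ...
```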
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.898 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5705612c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:19:22 compute-0 kernel: tap5705612c-20: left promiscuous mode
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.919 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[ddca66b8-9522-406d-822b-cba2f65589d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 nova_compute[261329]: 2025-10-10 10:19:22.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.949 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b53aa1-0d8f-4d79-b2a4-58c3bb264ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.954 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[cf63e959-3a61-466f-af7d-66c32a1a8837]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.973 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e59ede-21da-417a-8181-c1497b845002]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443946, 'reachable_time': 29907, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280788, 'error': None, 'target': 'ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.977 163038 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:19:22 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:22.977 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[8a311b49-54e3-4c56-b628-eb9cb39cdd95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:19:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d5705612c\x2d2460\x2d43a1\x2da07d\x2d7e0b37362a21.mount: Deactivated successfully.
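remove_netns on the neutron side unlinks the named network namespace, after which systemd reports the corresponding /run/netns bind mount as deactivated, as seen above. A minimal sketch of the same removal with pyroute2 (the library neutron's privileged ip_lib wraps; needs root), using the namespace name from the log:

```python
from pyroute2 import netns

NS = 'ovnmeta-5705612c-2460-43a1-a07d-7e0b37362a21'
if NS in netns.listnetns():  # names registered under /run/netns
    netns.remove(NS)         # detaches and unlinks the namespace
```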
Oct 10 10:19:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.311 2 INFO nova.virt.libvirt.driver [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Deleting instance files /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544_del
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.312 2 INFO nova.virt.libvirt.driver [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Deletion of /var/lib/nova/instances/18cfecd8-3017-4bde-906c-6b7784a3d544_del complete
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.377 2 INFO nova.compute.manager [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.377 2 DEBUG oslo.service.loopingcall [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.378 2 DEBUG nova.compute.manager [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.378 2 DEBUG nova.network.neutron [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:19:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:23.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:23.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
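The recurring anonymous HEAD / HTTP/1.0 requests from 192.168.122.100 and .102 look like load-balancer health probes against radosgw. If you need fields out of these beast access lines, the layout is regular enough to parse; a small, hypothetical parser matching exactly the format shown:

```python
import re

# hypothetical parser for the radosgw beast access-log lines above
BEAST = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<verb>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7f96beba75d0: 192.168.122.100 - anonymous '
        '[10/Oct/2025:10:19:23.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000032s')
m = BEAST.search(line)
print(m.group('ip'), m.group('status'), float(m.group('latency')))
# -> 192.168.122.100 200 0.001000032
```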
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.551 2 DEBUG nova.compute.manager [req-368e1492-c1e3-4564-9265-ea3ca1b0efd9 req-1d626955-df80-4c7d-890e-96ff44f6cfec 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-vif-unplugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.551 2 DEBUG oslo_concurrency.lockutils [req-368e1492-c1e3-4564-9265-ea3ca1b0efd9 req-1d626955-df80-4c7d-890e-96ff44f6cfec 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.552 2 DEBUG oslo_concurrency.lockutils [req-368e1492-c1e3-4564-9265-ea3ca1b0efd9 req-1d626955-df80-4c7d-890e-96ff44f6cfec 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.552 2 DEBUG oslo_concurrency.lockutils [req-368e1492-c1e3-4564-9265-ea3ca1b0efd9 req-1d626955-df80-4c7d-890e-96ff44f6cfec 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.552 2 DEBUG nova.compute.manager [req-368e1492-c1e3-4564-9265-ea3ca1b0efd9 req-1d626955-df80-4c7d-890e-96ff44f6cfec 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] No waiting events found dispatching network-vif-unplugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:19:23 compute-0 nova_compute[261329]: 2025-10-10 10:19:23.552 2 DEBUG nova.compute.manager [req-368e1492-c1e3-4564-9265-ea3ca1b0efd9 req-1d626955-df80-4c7d-890e-96ff44f6cfec 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-vif-unplugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
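The acquire/release pairs above are oslo.concurrency named locks serializing external-event handling per instance (note the <instance-uuid>-events lock name). The same pattern in application code, as a sketch:

```python
from oslo_concurrency import lockutils

LOCK = '18cfecd8-3017-4bde-906c-6b7784a3d544-events'

@lockutils.synchronized(LOCK)
def pop_event():
    # critical section: only one thread mutates this instance's pending
    # event set at a time, which is what the acquired/released lines record
    ...

# equivalent context-manager form
with lockutils.lock(LOCK):
    pass
```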
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.254 2 DEBUG nova.network.neutron [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updated VIF entry in instance network info cache for port ec07d396-e4e9-4e94-a3ef-9957f5b321d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.254 2 DEBUG nova.network.neutron [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updating instance_info_cache with network_info: [{"id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "address": "fa:16:3e:9a:c3:f8", "network": {"id": "5705612c-2460-43a1-a07d-7e0b37362a21", "bridge": "br-int", "label": "tempest-network-smoke--1182594036", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec07d396-e4", "ovs_interfaceid": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
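Worth noticing in this cache refresh: the port's floating_ips list is now empty (the floating IP 192.168.122.240 present in the earlier unplug dump has already been disassociated). Walking the nested network_info structure to list addresses is straightforward; a sketch over a copy of the JSON above, trimmed to the fields used:

```python
# vif entry shaped like the cached network_info logged above (abridged)
vif = {
    "id": "ec07d396-e4e9-4e94-a3ef-9957f5b321d0",
    "network": {"subnets": [{
        "ips": [{"address": "10.100.0.5", "type": "fixed",
                 "floating_ips": []}]}]},
}

for subnet in vif["network"]["subnets"]:
    for ip in subnet["ips"]:
        floats = [f["address"] for f in ip.get("floating_ips", [])]
        print(ip["type"], ip["address"], floats)
# -> fixed 10.100.0.5 []
```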
Oct 10 10:19:24 compute-0 ceph-mon[73551]: pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.284 2 DEBUG oslo_concurrency.lockutils [req-92861466-d46e-421c-a908-d08cb3a58548 req-be1aae25-3f65-46e1-ae71-66583324e22e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-18cfecd8-3017-4bde-906c-6b7784a3d544" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.293 2 DEBUG nova.network.neutron [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.310 2 INFO nova.compute.manager [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Took 0.93 seconds to deallocate network for instance.
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.359 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.360 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.441 2 DEBUG oslo_concurrency.processutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:19:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:19:24 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257682561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.903 2 DEBUG oslo_concurrency.processutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
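Here nova shells out to ceph df via oslo.concurrency to refresh storage stats (0.462s wall time per the log). A minimal sketch of the same call and of reading the cluster totals, assuming the usual ceph df --format=json layout with a top-level 'stats' object:

```python
import json
from oslo_concurrency import processutils

# same command line as logged; execute() returns (stdout, stderr)
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

stats = json.loads(out)
# cluster-wide totals, per the standard ceph df JSON schema (assumption)
print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
```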
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.911 2 DEBUG nova.compute.provider_tree [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.944 2 DEBUG nova.scheduler.client.report [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
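The inventory dict above is what the resource tracker reports to placement; usable capacity per resource class follows the placement convention capacity = (total - reserved) * allocation_ratio. Worked out for these numbers:

```python
# capacity = (total - reserved) * allocation_ratio, using the logged inventory
inv = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, d in inv.items():
    print(rc, int((d['total'] - d['reserved']) * d['allocation_ratio']))
# -> VCPU 32, MEMORY_MB 7168, DISK_GB 52
```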
Oct 10 10:19:24 compute-0 nova_compute[261329]: 2025-10-10 10:19:24.972 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.010 2 INFO nova.scheduler.client.report [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 18cfecd8-3017-4bde-906c-6b7784a3d544
Oct 10 10:19:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.111 2 DEBUG oslo_concurrency.lockutils [None req-e263945f-dc11-4119-8a6f-c20189efbc31 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2257682561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:25.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:25.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.673 2 DEBUG nova.compute.manager [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.673 2 DEBUG oslo_concurrency.lockutils [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.674 2 DEBUG oslo_concurrency.lockutils [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.674 2 DEBUG oslo_concurrency.lockutils [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "18cfecd8-3017-4bde-906c-6b7784a3d544-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.674 2 DEBUG nova.compute.manager [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] No waiting events found dispatching network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.674 2 WARNING nova.compute.manager [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received unexpected event network-vif-plugged-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 for instance with vm_state deleted and task_state None.
Oct 10 10:19:25 compute-0 nova_compute[261329]: 2025-10-10 10:19:25.675 2 DEBUG nova.compute.manager [req-1af20754-837e-4fa9-89f7-51d45e74434d req-533f2944-419a-4b1d-8f47-445d05507856 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Received event network-vif-deleted-ec07d396-e4e9-4e94-a3ef-9957f5b321d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:19:26 compute-0 ceph-mon[73551]: pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:19:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:19:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1758263211' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:19:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:19:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1758263211' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:19:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:19:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:27.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
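Alertmanager repeatedly fails to deliver to the Ceph dashboard receivers on compute-1 and compute-2 (timeouts, then retry give-ups). When the real endpoint is down, a hypothetical stand-in receiver is handy for confirming what is being posted; only the URL path and port below come from the log, the rest is an illustrative sketch:

```python
# hypothetical debugging stand-in for the unreachable dashboard endpoint
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == '/api/prometheus_receiver':
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            print(payload.get('alerts', []))  # dump what Alertmanager sent
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(('', 8443), Receiver).serve_forever()
```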
Oct 10 10:19:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1758263211' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:19:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1758263211' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:19:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:27.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:27] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:19:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:27] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:19:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:27.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:27 compute-0 nova_compute[261329]: 2025-10-10 10:19:27.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:27 compute-0 nova_compute[261329]: 2025-10-10 10:19:27.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:28 compute-0 ceph-mon[73551]: pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:19:28 compute-0 nova_compute[261329]: 2025-10-10 10:19:28.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:28 compute-0 nova_compute[261329]: 2025-10-10 10:19:28.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 17 KiB/s wr, 30 op/s
Oct 10 10:19:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:29.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:30 compute-0 ceph-mon[73551]: pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 17 KiB/s wr, 30 op/s
Oct 10 10:19:30 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:30.536 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:19:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 10 10:19:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:19:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:19:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:19:31 compute-0 ceph-mon[73551]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 10 10:19:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:32 compute-0 nova_compute[261329]: 2025-10-10 10:19:32.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:32 compute-0 nova_compute[261329]: 2025-10-10 10:19:32.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 10 10:19:33 compute-0 podman[280823]: 2025-10-10 10:19:33.235248029 +0000 UTC m=+0.076373349 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 10 10:19:33 compute-0 podman[280824]: 2025-10-10 10:19:33.259925376 +0000 UTC m=+0.100095196 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 10 10:19:33 compute-0 podman[280825]: 2025-10-10 10:19:33.279316905 +0000 UTC m=+0.102674618 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
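The three podman health_status=healthy events above come from the containers' scheduled healthchecks (the 'healthcheck' entry in each config_data). The same checks can be run on demand with podman healthcheck run, which exits 0 when healthy; a small sketch:

```python
import subprocess

# spot-check the same containers' healthchecks by hand
for name in ('multipathd', 'iscsid', 'ovn_controller'):
    rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
    print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')
```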
Oct 10 10:19:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:34 compute-0 ceph-mon[73551]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 10 10:19:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:19:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:35.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:35 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:19:36 compute-0 ceph-mon[73551]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:19:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:19:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:37.196Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:19:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:37.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:19:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:37] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:19:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:37] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Oct 10 10:19:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000065s ======
Oct 10 10:19:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:37.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct 10 10:19:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:37.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:37 compute-0 nova_compute[261329]: 2025-10-10 10:19:37.788 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091562.7865674, 18cfecd8-3017-4bde-906c-6b7784a3d544 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:19:37 compute-0 nova_compute[261329]: 2025-10-10 10:19:37.789 2 INFO nova.compute.manager [-] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] VM Stopped (Lifecycle Event)
Oct 10 10:19:37 compute-0 nova_compute[261329]: 2025-10-10 10:19:37.838 2 DEBUG nova.compute.manager [None req-a0a86984-72c5-474f-8534-fe526d4d517e - - - - - -] [instance: 18cfecd8-3017-4bde-906c-6b7784a3d544] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
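_get_power_state maps the hypervisor's domain state onto nova's power-state constants; the instance dump at the start of this teardown showed power_state=1, i.e. RUNNING, and the lifecycle event above reports the transition to Stopped. The constant values, as defined in nova.compute.power_state:

```python
# nova.compute.power_state constants (values as defined in nova)
POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
print(POWER_STATES[1])  # the state the instance dump reported before deletion
```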
Oct 10 10:19:37 compute-0 nova_compute[261329]: 2025-10-10 10:19:37.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:37 compute-0 nova_compute[261329]: 2025-10-10 10:19:37.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:38 compute-0 ceph-mon[73551]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:19:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:19:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:40 compute-0 sudo[280896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:19:40 compute-0 sudo[280896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:40 compute-0 sudo[280896]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:40 compute-0 ceph-mon[73551]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:19:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:19:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:41.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:19:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:41.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:41.908 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:41.909 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:19:41.909 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:42 compute-0 ceph-mon[73551]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:42 compute-0 nova_compute[261329]: 2025-10-10 10:19:42.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:42 compute-0 nova_compute[261329]: 2025-10-10 10:19:42.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:19:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:43.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:44 compute-0 ceph-mon[73551]: pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:19:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:45.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:45.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:46 compute-0 ceph-mon[73551]: pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:19:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:19:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:19:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:19:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:19:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:19:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:19:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:47.197Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:19:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:47.197Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:19:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:47.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:19:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:47] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 10 10:19:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:47] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 10 10:19:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:47.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:47.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:47 compute-0 nova_compute[261329]: 2025-10-10 10:19:47.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:47 compute-0 nova_compute[261329]: 2025-10-10 10:19:47.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:48 compute-0 podman[280931]: 2025-10-10 10:19:48.22503082 +0000 UTC m=+0.054719807 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:19:48 compute-0 ceph-mon[73551]: pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2667140307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:19:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:49.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:50 compute-0 ceph-mon[73551]: pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:19:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4063882823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:19:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:51.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:51.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:52 compute-0 ceph-mon[73551]: pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1076408673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:19:52 compute-0 nova_compute[261329]: 2025-10-10 10:19:52.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:52 compute-0 nova_compute[261329]: 2025-10-10 10:19:52.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:53.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:54 compute-0 ceph-mon[73551]: pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:19:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:19:56 compute-0 nova_compute[261329]: 2025-10-10 10:19:56.011 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:56 compute-0 nova_compute[261329]: 2025-10-10 10:19:56.011 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:19:56 compute-0 nova_compute[261329]: 2025-10-10 10:19:56.011 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:19:56 compute-0 nova_compute[261329]: 2025-10-10 10:19:56.028 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:19:56 compute-0 ceph-mon[73551]: pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:57.199Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:19:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:57.199Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:19:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:19:57.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:19:57 compute-0 nova_compute[261329]: 2025-10-10 10:19:57.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:57 compute-0 nova_compute[261329]: 2025-10-10 10:19:57.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:57] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 10 10:19:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:19:57] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 10 10:19:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:57.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:19:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:19:57 compute-0 nova_compute[261329]: 2025-10-10 10:19:57.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:57 compute-0 nova_compute[261329]: 2025-10-10 10:19:57.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:58 compute-0 nova_compute[261329]: 2025-10-10 10:19:58.232 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:58 compute-0 nova_compute[261329]: 2025-10-10 10:19:58.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:58 compute-0 nova_compute[261329]: 2025-10-10 10:19:58.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:58 compute-0 ceph-mon[73551]: pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.267 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.268 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.268 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.269 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.269 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:19:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:59.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:19:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:19:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:19:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:19:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969596316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.803 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:19:59 compute-0 nova_compute[261329]: 2025-10-10 10:19:59.999 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:20:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.000 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4587MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.001 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.001 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.082 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.083 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.104 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:00 compute-0 sudo[280984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:20:00 compute-0 sudo[280984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:00 compute-0 sudo[280984]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:00 compute-0 ceph-mon[73551]: pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:20:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3969596316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:00 compute-0 ceph-mon[73551]: overall HEALTH_OK
Oct 10 10:20:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:20:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574646968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.590 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.598 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.623 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.668 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:20:00 compute-0 nova_compute[261329]: 2025-10-10 10:20:00.669 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 10 10:20:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:20:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3574646968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:01.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:01.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:01 compute-0 nova_compute[261329]: 2025-10-10 10:20:01.672 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:01 compute-0 nova_compute[261329]: 2025-10-10 10:20:01.673 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:20:02 compute-0 nova_compute[261329]: 2025-10-10 10:20:02.234 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:02 compute-0 ceph-mon[73551]: pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 10 10:20:02 compute-0 nova_compute[261329]: 2025-10-10 10:20:02.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:02 compute-0 nova_compute[261329]: 2025-10-10 10:20:02.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:20:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2835618541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:03 compute-0 ceph-mon[73551]: pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:20:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:03.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:03.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:04 compute-0 podman[281035]: 2025-10-10 10:20:04.224237852 +0000 UTC m=+0.058486938 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 10 10:20:04 compute-0 podman[281034]: 2025-10-10 10:20:04.236347899 +0000 UTC m=+0.067399022 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:20:04 compute-0 podman[281036]: 2025-10-10 10:20:04.29026638 +0000 UTC m=+0.124507765 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:20:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1396745541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1513687594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/190815280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:20:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:05.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:05 compute-0 ceph-mon[73551]: pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:20:05 compute-0 ovn_controller[153080]: 2025-10-10T10:20:05Z|00066|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 10 10:20:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:20:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:05.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:20:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1617462293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:20:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:07.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:20:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:07] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:20:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:07] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 10 10:20:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:07.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:07 compute-0 ceph-mon[73551]: pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:20:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:20:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:20:07 compute-0 nova_compute[261329]: 2025-10-10 10:20:07.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:07 compute-0 nova_compute[261329]: 2025-10-10 10:20:07.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 10 10:20:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:09.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:09.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:10 compute-0 ceph-mon[73551]: pgmap v1026: 353 pgs: 353 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 10 10:20:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1482921149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:20:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1189993806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:20:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 10 10:20:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:11.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:11 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=infra.usagestats t=2025-10-10T10:20:11.772777958Z level=info msg="Usage stats are ready to report"
Oct 10 10:20:12 compute-0 sudo[281105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:20:12 compute-0 sudo[281105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:12 compute-0 ceph-mon[73551]: pgmap v1027: 353 pgs: 353 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 10 10:20:12 compute-0 sudo[281105]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:12 compute-0 sudo[281131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 10:20:12 compute-0 sudo[281131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:12 compute-0 podman[281231]: 2025-10-10 10:20:12.876525025 +0000 UTC m=+0.094131456 container exec 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:20:12 compute-0 nova_compute[261329]: 2025-10-10 10:20:12.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-0 podman[281231]: 2025-10-10 10:20:12.983755367 +0000 UTC m=+0.201361728 container exec_died 2dc12dfc814366723294aefb431c1abe614e7ea7bb48fbb65f2ef3d4d9a0e79e (image=quay.io/ceph/ceph:v19, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:20:12 compute-0 nova_compute[261329]: 2025-10-10 10:20:12.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 10 10:20:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:13 compute-0 podman[281353]: 2025-10-10 10:20:13.492143173 +0000 UTC m=+0.069749407 container exec 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:20:13 compute-0 podman[281353]: 2025-10-10 10:20:13.504625241 +0000 UTC m=+0.082231455 container exec_died 9d8ec43ed60478f588e78e0d7e73fb3ddd4897ff172c2a182f3f3ed6b7edaf7b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:20:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:13.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:14 compute-0 podman[281491]: 2025-10-10 10:20:14.093487835 +0000 UTC m=+0.044672106 container exec 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 10:20:14 compute-0 podman[281491]: 2025-10-10 10:20:14.099951152 +0000 UTC m=+0.051135403 container exec_died 8e453d2a63653fdb6aebf0cd78a8120a2c11f04385b8b7efe22c2fbdcbd19be6 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-0-gptveb)
Oct 10 10:20:14 compute-0 ceph-mon[73551]: pgmap v1028: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 10 10:20:14 compute-0 podman[281558]: 2025-10-10 10:20:14.297550198 +0000 UTC m=+0.047699463 container exec 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9)
Oct 10 10:20:14 compute-0 podman[281558]: 2025-10-10 10:20:14.312552117 +0000 UTC m=+0.062701382 container exec_died 1155bdb4eca08fd5761322bfde5c75f2cdfff547573cc87b914d5ad4cc9e8213 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-0-mciijj, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Oct 10 10:20:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:14 compute-0 podman[281622]: 2025-10-10 10:20:14.506511588 +0000 UTC m=+0.048186709 container exec e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:20:14 compute-0 podman[281622]: 2025-10-10 10:20:14.533643094 +0000 UTC m=+0.075318195 container exec_died e66dd3fafc73a254f9980714bce6fe60f401f220b6b4860d8dab7967253f8b1a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:20:14 compute-0 podman[281698]: 2025-10-10 10:20:14.733032028 +0000 UTC m=+0.055474243 container exec 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 10:20:14 compute-0 podman[281698]: 2025-10-10 10:20:14.903712025 +0000 UTC m=+0.226154240 container exec_died 78408a16a933cba025d0dc387367fc0527ca690021bb3487e5e6ff0bb3bbb135 (image=quay.io/ceph/grafana:10.4.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 10 10:20:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 10 10:20:15 compute-0 podman[281811]: 2025-10-10 10:20:15.305535619 +0000 UTC m=+0.049809510 container exec fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:20:15 compute-0 podman[281811]: 2025-10-10 10:20:15.345881827 +0000 UTC m=+0.090155758 container exec_died fd9b1d051712bd4aa866ae00fcbedc537b2565a881c907461a3a581bdfcbe056 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:20:15 compute-0 sudo[281131]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:20:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:20:15 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:15.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:15 compute-0 sudo[281852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:20:15 compute-0 sudo[281852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:15 compute-0 sudo[281852]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:15 compute-0 sudo[281877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:20:15 compute-0 sudo[281877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:15.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:16 compute-0 sudo[281877]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: pgmap v1029: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:20:16 compute-0 sudo[281934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:20:16 compute-0 sudo[281934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:16 compute-0 sudo[281934]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:16 compute-0 sudo[281959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:20:16 compute-0 sudo[281959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:20:16
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'vms', '.nfs']
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:20:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:20:16 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.691316288 +0000 UTC m=+0.049445479 container create 3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pasteur, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 10:20:16 compute-0 systemd[1]: Started libpod-conmon-3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665.scope.
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.666998412 +0000 UTC m=+0.025127623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:20:16 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.782799867 +0000 UTC m=+0.140929088 container init 3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.790932738 +0000 UTC m=+0.149061929 container start 3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.794862453 +0000 UTC m=+0.152991654 container attach 3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:20:16 compute-0 conmon[282042]: conmon 3e2e80746eb74d45c78a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665.scope/container/memory.events
Oct 10 10:20:16 compute-0 systemd[1]: libpod-3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665.scope: Deactivated successfully.
Oct 10 10:20:16 compute-0 quirky_pasteur[282042]: 167 167
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.796375291 +0000 UTC m=+0.154504472 container died 3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pasteur, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 10:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0a635af524dabc7d68d3b9e5c84a33131d0b5e3602c7cba1718aa7ef083f4cf-merged.mount: Deactivated successfully.
Oct 10 10:20:16 compute-0 podman[282026]: 2025-10-10 10:20:16.844153846 +0000 UTC m=+0.202283017 container remove 3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:20:16 compute-0 systemd[1]: libpod-conmon-3e2e80746eb74d45c78a7df7e5b7aa107abb0b1cee80c589be155dcfdaf3d665.scope: Deactivated successfully.
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:20:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:17.00032258 +0000 UTC m=+0.038128279 container create fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:20:17 compute-0 systemd[1]: Started libpod-conmon-fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e.scope.
Oct 10 10:20:17 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870f46745578304c7c12565fee678978e62e849672c9803f9f6ea833d401468f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870f46745578304c7c12565fee678978e62e849672c9803f9f6ea833d401468f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870f46745578304c7c12565fee678978e62e849672c9803f9f6ea833d401468f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870f46745578304c7c12565fee678978e62e849672c9803f9f6ea833d401468f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870f46745578304c7c12565fee678978e62e849672c9803f9f6ea833d401468f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:16.985229408 +0000 UTC m=+0.023035116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:17.08991982 +0000 UTC m=+0.127725538 container init fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:17.101059676 +0000 UTC m=+0.138865384 container start fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:17.104872527 +0000 UTC m=+0.142678215 container attach fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 10:20:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:17.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:20:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:17.202Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:20:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:17.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:20:17 compute-0 ceph-mon[73551]: pgmap v1030: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 10 10:20:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:17 compute-0 ceph-mon[73551]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 10 10:20:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:20:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:20:17 compute-0 eager_bohr[282086]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:20:17 compute-0 eager_bohr[282086]: --> All data devices are unavailable
Oct 10 10:20:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:20:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:20:17 compute-0 systemd[1]: libpod-fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e.scope: Deactivated successfully.
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:17.468501213 +0000 UTC m=+0.506306921 container died fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-870f46745578304c7c12565fee678978e62e849672c9803f9f6ea833d401468f-merged.mount: Deactivated successfully.
Oct 10 10:20:17 compute-0 podman[282069]: 2025-10-10 10:20:17.508473718 +0000 UTC m=+0.546279406 container remove fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:20:17 compute-0 systemd[1]: libpod-conmon-fcf97d7330abc313be81884b5e4333161a925f1ef96e71ecdcd02dcb21d4319e.scope: Deactivated successfully.
Oct 10 10:20:17 compute-0 sudo[281959]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:17 compute-0 sudo[282114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:20:17 compute-0 sudo[282114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:17 compute-0 sudo[282114]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:17 compute-0 sudo[282139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:20:17 compute-0 sudo[282139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:17 compute-0 nova_compute[261329]: 2025-10-10 10:20:17.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:18 compute-0 nova_compute[261329]: 2025-10-10 10:20:18.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.064871736 +0000 UTC m=+0.049464350 container create 928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_panini, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:20:18 compute-0 systemd[1]: Started libpod-conmon-928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707.scope.
Oct 10 10:20:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Oct 10 10:20:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.038134553 +0000 UTC m=+0.022727257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.146949606 +0000 UTC m=+0.131542240 container init 928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.156060567 +0000 UTC m=+0.140653181 container start 928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.159530678 +0000 UTC m=+0.144123332 container attach 928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 10:20:18 compute-0 admiring_panini[282220]: 167 167
Oct 10 10:20:18 compute-0 systemd[1]: libpod-928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707.scope: Deactivated successfully.
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.162029327 +0000 UTC m=+0.146621951 container died 928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_panini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4995f0a8108de3f89e153d9a9b9e891bc496cd6d7b00eda3c4043f42280f0de-merged.mount: Deactivated successfully.
Oct 10 10:20:18 compute-0 podman[282204]: 2025-10-10 10:20:18.204635277 +0000 UTC m=+0.189227901 container remove 928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_panini, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:20:18 compute-0 systemd[1]: libpod-conmon-928f8c3d602367440f35720f607cc118d5f725bfb998194229215e24f30f1707.scope: Deactivated successfully.
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.40572934 +0000 UTC m=+0.043429600 container create 495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:20:18 compute-0 systemd[1]: Started libpod-conmon-495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf.scope.
Oct 10 10:20:18 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fd9da6db434cceb12b972072f21442cf6c27f0da4a771b2dfbcee650464c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fd9da6db434cceb12b972072f21442cf6c27f0da4a771b2dfbcee650464c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fd9da6db434cceb12b972072f21442cf6c27f0da4a771b2dfbcee650464c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fd9da6db434cceb12b972072f21442cf6c27f0da4a771b2dfbcee650464c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.47873794 +0000 UTC m=+0.116438250 container init 495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_solomon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.388784836 +0000 UTC m=+0.026485116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.485571444 +0000 UTC m=+0.123271724 container start 495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.488563849 +0000 UTC m=+0.126264119 container attach 495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:20:18 compute-0 podman[282261]: 2025-10-10 10:20:18.508207377 +0000 UTC m=+0.062849230 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:20:18 compute-0 zen_solomon[282265]: {
Oct 10 10:20:18 compute-0 zen_solomon[282265]:     "0": [
Oct 10 10:20:18 compute-0 zen_solomon[282265]:         {
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "devices": [
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "/dev/loop3"
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             ],
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "lv_name": "ceph_lv0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "lv_size": "21470642176",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "name": "ceph_lv0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "tags": {
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.cluster_name": "ceph",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.crush_device_class": "",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.encrypted": "0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.osd_id": "0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.type": "block",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.vdo": "0",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:                 "ceph.with_tpm": "0"
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             },
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "type": "block",
Oct 10 10:20:18 compute-0 zen_solomon[282265]:             "vg_name": "ceph_vg0"
Oct 10 10:20:18 compute-0 zen_solomon[282265]:         }
Oct 10 10:20:18 compute-0 zen_solomon[282265]:     ]
Oct 10 10:20:18 compute-0 zen_solomon[282265]: }
Oct 10 10:20:18 compute-0 systemd[1]: libpod-495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf.scope: Deactivated successfully.
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.772178503 +0000 UTC m=+0.409878773 container died 495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-845fd9da6db434cceb12b972072f21442cf6c27f0da4a771b2dfbcee650464c0-merged.mount: Deactivated successfully.
Oct 10 10:20:18 compute-0 podman[282247]: 2025-10-10 10:20:18.821240238 +0000 UTC m=+0.458940498 container remove 495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:20:18 compute-0 systemd[1]: libpod-conmon-495351c60147a2f219dbea45dc2204d50b6e1b14be2b9654ae0ddf5aaa310dcf.scope: Deactivated successfully.
Oct 10 10:20:18 compute-0 sudo[282139]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:18 compute-0 sudo[282307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:20:18 compute-0 sudo[282307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:18 compute-0 sudo[282307]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:19 compute-0 sudo[282332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:20:19 compute-0 sudo[282332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:19 compute-0 ceph-mon[73551]: pgmap v1031: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Oct 10 10:20:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.425916766 +0000 UTC m=+0.041753296 container create a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 10:20:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:19.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:19 compute-0 systemd[1]: Started libpod-conmon-a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb.scope.
Oct 10 10:20:19 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.407761954 +0000 UTC m=+0.023598504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.516969435 +0000 UTC m=+0.132805995 container init a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.530834601 +0000 UTC m=+0.146671131 container start a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.534650211 +0000 UTC m=+0.150486761 container attach a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:20:19 compute-0 romantic_shtern[282416]: 167 167
Oct 10 10:20:19 compute-0 systemd[1]: libpod-a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb.scope: Deactivated successfully.
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.538395309 +0000 UTC m=+0.154231849 container died a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0061a4e265a27606c5d852e980e9a9799f7cbe374663c6958bce38630ea85006-merged.mount: Deactivated successfully.
Oct 10 10:20:19 compute-0 podman[282400]: 2025-10-10 10:20:19.579782093 +0000 UTC m=+0.195618633 container remove a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:20:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:20:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:20:19 compute-0 systemd[1]: libpod-conmon-a087c2b3598114bacab66dbf6c07a2137a1624f29424f846afddcb4aa356f6eb.scope: Deactivated successfully.
Oct 10 10:20:19 compute-0 podman[282440]: 2025-10-10 10:20:19.759947608 +0000 UTC m=+0.041103815 container create b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kapitsa, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:20:19 compute-0 systemd[1]: Started libpod-conmon-b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda.scope.
Oct 10 10:20:19 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f25aad47899f85b4377e0f0ca3db50cfa597345b4005a9e1017668bbc6b7d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f25aad47899f85b4377e0f0ca3db50cfa597345b4005a9e1017668bbc6b7d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f25aad47899f85b4377e0f0ca3db50cfa597345b4005a9e1017668bbc6b7d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:19 compute-0 podman[282440]: 2025-10-10 10:20:19.741238719 +0000 UTC m=+0.022394926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f25aad47899f85b4377e0f0ca3db50cfa597345b4005a9e1017668bbc6b7d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:19 compute-0 podman[282440]: 2025-10-10 10:20:19.850887324 +0000 UTC m=+0.132043581 container init b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:20:19 compute-0 podman[282440]: 2025-10-10 10:20:19.857198792 +0000 UTC m=+0.138355009 container start b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:20:19 compute-0 podman[282440]: 2025-10-10 10:20:19.860835846 +0000 UTC m=+0.141992093 container attach b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kapitsa, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:20:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 82 op/s
Oct 10 10:20:20 compute-0 sudo[282509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:20:20 compute-0 sudo[282509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:20 compute-0 sudo[282509]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:20 compute-0 lvm[282556]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:20:20 compute-0 lvm[282556]: VG ceph_vg0 finished
Oct 10 10:20:20 compute-0 lucid_kapitsa[282456]: {}
Oct 10 10:20:20 compute-0 systemd[1]: libpod-b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda.scope: Deactivated successfully.
Oct 10 10:20:20 compute-0 podman[282440]: 2025-10-10 10:20:20.634011322 +0000 UTC m=+0.915167539 container died b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kapitsa, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:20:20 compute-0 systemd[1]: libpod-b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda.scope: Consumed 1.198s CPU time.
Oct 10 10:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f25aad47899f85b4377e0f0ca3db50cfa597345b4005a9e1017668bbc6b7d2-merged.mount: Deactivated successfully.
Oct 10 10:20:20 compute-0 podman[282440]: 2025-10-10 10:20:20.689272213 +0000 UTC m=+0.970428430 container remove b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:20:20 compute-0 systemd[1]: libpod-conmon-b348b0380f84a96a7555eaa1881188294ead89bba94770e5b224ce80e1ed2bda.scope: Deactivated successfully.
Oct 10 10:20:20 compute-0 sudo[282332]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:20:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:20:20 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:20 compute-0 sudo[282573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:20:20 compute-0 sudo[282573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:20 compute-0 sudo[282573]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:21 compute-0 ceph-mon[73551]: pgmap v1032: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 82 op/s
Oct 10 10:20:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:21 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 82 op/s
Oct 10 10:20:22 compute-0 nova_compute[261329]: 2025-10-10 10:20:22.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:23 compute-0 nova_compute[261329]: 2025-10-10 10:20:23.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:23 compute-0 ceph-mon[73551]: pgmap v1033: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 82 op/s
Oct 10 10:20:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:23.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 81 op/s
Oct 10 10:20:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:25 compute-0 ceph-mon[73551]: pgmap v1034: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 81 op/s
Oct 10 10:20:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:25.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:25.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 81 op/s
Oct 10 10:20:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:27.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:20:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:27.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:20:27 compute-0 ceph-mon[73551]: pgmap v1035: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 81 op/s
Oct 10 10:20:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2156092452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:20:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2156092452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:20:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:20:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:27] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 10 10:20:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:27.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:27 compute-0 nova_compute[261329]: 2025-10-10 10:20:27.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:28 compute-0 nova_compute[261329]: 2025-10-10 10:20:28.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 10 10:20:29 compute-0 ceph-mon[73551]: pgmap v1036: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 10 10:20:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:29.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:31 compute-0 ceph-mon[73551]: pgmap v1037: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 10 10:20:31 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:20:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:31.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:31.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:32 compute-0 nova_compute[261329]: 2025-10-10 10:20:32.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:33 compute-0 nova_compute[261329]: 2025-10-10 10:20:33.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:33 compute-0 ceph-mon[73551]: pgmap v1038: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:33.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:20:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:35 compute-0 podman[282614]: 2025-10-10 10:20:35.247307254 +0000 UTC m=+0.086240187 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 10:20:35 compute-0 podman[282615]: 2025-10-10 10:20:35.266834749 +0000 UTC m=+0.105113492 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:20:35 compute-0 podman[282616]: 2025-10-10 10:20:35.289900746 +0000 UTC m=+0.125780473 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:20:35 compute-0 ceph-mon[73551]: pgmap v1039: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:20:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:35.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:36 compute-0 ceph-mon[73551]: pgmap v1040: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:37.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:20:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:37] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:20:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:37] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:20:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:37.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:37 compute-0 nova_compute[261329]: 2025-10-10 10:20:37.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:38 compute-0 nova_compute[261329]: 2025-10-10 10:20:38.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:20:38 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:20:38.890 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:20:38 compute-0 nova_compute[261329]: 2025-10-10 10:20:38.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:38 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:20:38.892 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:20:39 compute-0 ceph-mon[73551]: pgmap v1041: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:20:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:39.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:20:40 compute-0 sudo[282686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:20:40 compute-0 sudo[282686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:40 compute-0 sudo[282686]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:40 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:20:40.894 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:41 compute-0 ceph-mon[73551]: pgmap v1042: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:20:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:41.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:41.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:20:41.909 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:20:41.909 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:20:41.909 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:20:42 compute-0 nova_compute[261329]: 2025-10-10 10:20:42.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:43 compute-0 nova_compute[261329]: 2025-10-10 10:20:43.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:43 compute-0 ceph-mon[73551]: pgmap v1043: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:20:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:43.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 15 KiB/s wr, 2 op/s
Oct 10 10:20:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:45 compute-0 ceph-mon[73551]: pgmap v1044: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 15 KiB/s wr, 2 op/s
Oct 10 10:20:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:45.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 15 KiB/s wr, 2 op/s
Oct 10 10:20:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:20:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:20:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:20:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:47.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:20:47 compute-0 ceph-mon[73551]: pgmap v1045: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 15 KiB/s wr, 2 op/s
Oct 10 10:20:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:47] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:20:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:47] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:20:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:47.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:47 compute-0 nova_compute[261329]: 2025-10-10 10:20:47.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:48 compute-0 nova_compute[261329]: 2025-10-10 10:20:48.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 10 10:20:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2924327785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:49 compute-0 podman[282720]: 2025-10-10 10:20:49.219292525 +0000 UTC m=+0.062875471 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:20:49 compute-0 ceph-mon[73551]: pgmap v1046: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 10 10:20:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:49.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:20:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:49.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:20:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Oct 10 10:20:51 compute-0 ceph-mon[73551]: pgmap v1047: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Oct 10 10:20:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:51.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Oct 10 10:20:52 compute-0 nova_compute[261329]: 2025-10-10 10:20:52.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:53 compute-0 nova_compute[261329]: 2025-10-10 10:20:53.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:53 compute-0 ceph-mon[73551]: pgmap v1048: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Oct 10 10:20:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2442865633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:53.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 7.7 KiB/s wr, 58 op/s
Oct 10 10:20:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:55 compute-0 ceph-mon[73551]: pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 7.7 KiB/s wr, 58 op/s
Oct 10 10:20:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:55.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:55 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:20:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Oct 10 10:20:56 compute-0 nova_compute[261329]: 2025-10-10 10:20:56.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:56 compute-0 nova_compute[261329]: 2025-10-10 10:20:56.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:20:56 compute-0 nova_compute[261329]: 2025-10-10 10:20:56.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:20:56 compute-0 nova_compute[261329]: 2025-10-10 10:20:56.256 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:20:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:20:57.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:20:57 compute-0 ceph-mon[73551]: pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Oct 10 10:20:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:57] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:20:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:20:57] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:20:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:57 compute-0 nova_compute[261329]: 2025-10-10 10:20:57.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:58 compute-0 nova_compute[261329]: 2025-10-10 10:20:58.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.261 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.262 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.262 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.262 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.262 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:59 compute-0 ceph-mon[73551]: pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Oct 10 10:20:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3082504773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:59.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:20:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:20:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267748783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.760 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.935 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.937 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4568MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.937 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:59 compute-0 nova_compute[261329]: 2025-10-10 10:20:59.937 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.061 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.061 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.085 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4267748783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4120952754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:21:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358297504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:00 compute-0 sudo[282793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:21:00 compute-0 sudo[282793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:00 compute-0 sudo[282793]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.573 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.579 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.600 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.602 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:21:00 compute-0 nova_compute[261329]: 2025-10-10 10:21:00.602 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:21:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:01 compute-0 ceph-mon[73551]: pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1358297504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:01.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:01 compute-0 nova_compute[261329]: 2025-10-10 10:21:01.599 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:01 compute-0 nova_compute[261329]: 2025-10-10 10:21:01.599 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:01 compute-0 nova_compute[261329]: 2025-10-10 10:21:01.599 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:01 compute-0 nova_compute[261329]: 2025-10-10 10:21:01.599 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:01 compute-0 nova_compute[261329]: 2025-10-10 10:21:01.600 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:21:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:01.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:02 compute-0 nova_compute[261329]: 2025-10-10 10:21:02.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:03 compute-0 nova_compute[261329]: 2025-10-10 10:21:03.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:03 compute-0 ceph-mon[73551]: pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:03.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:04 compute-0 nova_compute[261329]: 2025-10-10 10:21:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:04 compute-0 nova_compute[261329]: 2025-10-10 10:21:04.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:21:04 compute-0 nova_compute[261329]: 2025-10-10 10:21:04.258 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:05 compute-0 ceph-mon[73551]: pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3705335326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:05.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:21:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:21:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:06 compute-0 podman[282826]: 2025-10-10 10:21:06.243277045 +0000 UTC m=+0.078171183 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 10 10:21:06 compute-0 podman[282828]: 2025-10-10 10:21:06.260433726 +0000 UTC m=+0.098484014 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:21:06 compute-0 podman[282827]: 2025-10-10 10:21:06.268285193 +0000 UTC m=+0.101693404 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 10 10:21:06 compute-0 nova_compute[261329]: 2025-10-10 10:21:06.271 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:06 compute-0 nova_compute[261329]: 2025-10-10 10:21:06.272 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:21:06 compute-0 nova_compute[261329]: 2025-10-10 10:21:06.290 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:21:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/127293249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:06 compute-0 ceph-mon[73551]: pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:07.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:21:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:21:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:07.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:07.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:07 compute-0 nova_compute[261329]: 2025-10-10 10:21:07.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:08 compute-0 nova_compute[261329]: 2025-10-10 10:21:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:21:09 compute-0 ceph-mon[73551]: pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:21:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:09.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:11 compute-0 ceph-mon[73551]: pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:11.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:12 compute-0 nova_compute[261329]: 2025-10-10 10:21:12.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:13 compute-0 nova_compute[261329]: 2025-10-10 10:21:13.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:13 compute-0 ceph-mon[73551]: pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:13.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:13.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:21:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:15 compute-0 ceph-mon[73551]: pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:21:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:15.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.542 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.543 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.566 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:21:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:15.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.694 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.695 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.702 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.703 2 INFO nova.compute.claims [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Claim successful on node compute-0.ctlplane.example.com
Oct 10 10:21:15 compute-0 nova_compute[261329]: 2025-10-10 10:21:15.836 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:21:16
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', 'default.rgw.log', '.nfs', 'backups']
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:21:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:21:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:21:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/463243596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.371 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.377 2 DEBUG nova.compute.provider_tree [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.394 2 DEBUG nova.scheduler.client.report [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.418 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.419 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.484 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.485 2 DEBUG nova.network.neutron [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.506 2 INFO nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.523 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.723 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.725 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.725 2 INFO nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Creating image(s)
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.755 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.782 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.812 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.815 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:21:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.893 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
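Annotation: the qemu-img probe nova just ran can be reproduced standalone. This sketch drops nova's oslo prlimit wrapper (the --as/--cpu resource caps) and keeps only the underlying command with the same flags and C locale:

    import json, os, subprocess

    def qemu_img_info(path):
        # same flags as the logged command; C locale keeps output stable
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            env={**os.environ, "LC_ALL": "C", "LANG": "C"})
        return json.loads(out)

    info = qemu_img_info(
        "/var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1")
    print(info["format"], info["virtual-size"])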
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.893 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.894 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.894 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.920 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:16 compute-0 nova_compute[261329]: 2025-10-10 10:21:16.923 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 9feccadf-731e-4960-8772-bd18adf2908d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.042 2 DEBUG nova.policy [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.201 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 9feccadf-731e-4960-8772-bd18adf2908d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:17.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
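Annotation: this alertmanager error is a timeout POSTing alerts to the dashboard receivers on compute-1 and compute-2, while compute-0's own receiver answers 200 a moment later (see the POST to /api/prometheus_receiver below) — most likely because only the active mgr serves the dashboard endpoint. A hypothetical probe of one failing receiver:

    import urllib.request, urllib.error

    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}", headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)  # expected here, matching the timeout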
Oct 10 10:21:17 compute-0 ceph-mon[73551]: pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:17 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/463243596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.285 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
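Annotation: nova imports the cached base image into the vms pool and then grows it to 1073741824 bytes, i.e. the 1 GiB root disk of the m1.nano flavor (root_gb=1, visible in the flavor dump further down). The resize itself happens through librbd inside nova; a rough CLI equivalent of both steps would be:

    import subprocess

    POOL, IMG = "vms", "9feccadf-731e-4960-8772-bd18adf2908d_disk"
    BASE = "/var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", POOL, BASE, IMG,
                           "--image-format=2", *CEPH])
    subprocess.check_call(["rbd", "resize", "--pool", POOL, IMG,
                           "--size", "1G", *CEPH])  # 1 GiB = 1073741824 bytes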
Oct 10 10:21:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:17] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 10 10:21:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:17] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.402 2 DEBUG nova.objects.instance [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9feccadf-731e-4960-8772-bd18adf2908d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.418 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.419 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Ensure instance console log exists: /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.419 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.420 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.420 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:17 compute-0 nova_compute[261329]: 2025-10-10 10:21:17.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:18 compute-0 nova_compute[261329]: 2025-10-10 10:21:18.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Oct 10 10:21:18 compute-0 nova_compute[261329]: 2025-10-10 10:21:18.205 2 DEBUG nova.network.neutron [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Successfully created port: 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:21:18 compute-0 ceph-mgr[73845]: [dashboard INFO request] [192.168.122.100:51840] [POST] [200] [0.002s] [4.0B] [10630dab-96dc-4a49-afeb-fcb40ceb8678] /api/prometheus_receiver
Oct 10 10:21:19 compute-0 ceph-mon[73551]: pgmap v1061: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Oct 10 10:21:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.104 2 DEBUG nova.network.neutron [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Successfully updated port: 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.124 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.125 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.125 2 DEBUG nova.network.neutron [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:21:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.195 2 DEBUG nova.compute.manager [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-changed-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.196 2 DEBUG nova.compute.manager [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Refreshing instance network info cache due to event network-changed-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.196 2 DEBUG oslo_concurrency.lockutils [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:21:20 compute-0 podman[283094]: 2025-10-10 10:21:20.251181511 +0000 UTC m=+0.094950212 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:21:20 compute-0 nova_compute[261329]: 2025-10-10 10:21:20.309 2 DEBUG nova.network.neutron [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:21:20 compute-0 sudo[283113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:21:20 compute-0 sudo[283113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:20 compute-0 sudo[283113]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:21 compute-0 sudo[283139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:21:21 compute-0 sudo[283139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:21 compute-0 sudo[283139]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:21 compute-0 sudo[283164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:21:21 compute-0 sudo[283164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: pgmap v1062: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Oct 10 10:21:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:21.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:21 compute-0 sudo[283164]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:21:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:21:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:21:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:21:21 compute-0 sudo[283222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:21:22 compute-0 sudo[283222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:22 compute-0 sudo[283222]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:22 compute-0 sudo[283247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:21:22 compute-0 sudo[283247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.193 2 DEBUG nova.network.neutron [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updating instance_info_cache with network_info: [{"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
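Annotation: in the network_info above, mtu is 1442 rather than 1500 because the port sits on a tunneled network ("tunneled": true); assuming the usual OVN Geneve encapsulation overhead of 58 bytes on a 1500-byte underlay:

    GENEVE_OVERHEAD = 58   # assumed OVN default for an IPv4 underlay
    PHYSICAL_MTU = 1500
    print(PHYSICAL_MTU - GENEVE_OVERHEAD)  # 1442, as carried in the port metadata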
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.220 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.221 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Instance network_info: |[{"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.221 2 DEBUG oslo_concurrency.lockutils [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.221 2 DEBUG nova.network.neutron [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Refreshing network info cache for port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.224 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Start _get_guest_xml network_info=[{"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encrypted': False, 'encryption_options': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.228 2 WARNING nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.236 2 DEBUG nova.virt.libvirt.host [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.236 2 DEBUG nova.virt.libvirt.host [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.246 2 DEBUG nova.virt.libvirt.host [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.247 2 DEBUG nova.virt.libvirt.host [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.247 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.248 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.248 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.248 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.248 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.249 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.249 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.249 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.249 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.249 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.250 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.250 2 DEBUG nova.virt.hardware [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
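Annotation: the topology search above is trivial here — the flavor gives one vCPU and the 0:0:0 limits/preferences mean "unset", so the only possible layout is sockets=1, cores=1, threads=1. A toy enumeration of valid layouts for n vCPUs (illustrative only, not nova's code):

    import itertools

    def layouts(vcpus):
        # every (sockets, cores, threads) triple whose product is vcpus
        return [(s, c, t)
                for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3)
                if s * c * t == vcpus]

    print(layouts(1))  # [(1, 1, 1)]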
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.252 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:21:22 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.446771813 +0000 UTC m=+0.043752149 container create 0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:21:22 compute-0 systemd[1]: Started libpod-conmon-0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e.scope.
Oct 10 10:21:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.518804723 +0000 UTC m=+0.115785089 container init 0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.429436288 +0000 UTC m=+0.026416644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.528169827 +0000 UTC m=+0.125150173 container start 0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.531860644 +0000 UTC m=+0.128840990 container attach 0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:21:22 compute-0 charming_germain[283350]: 167 167
Oct 10 10:21:22 compute-0 systemd[1]: libpod-0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e.scope: Deactivated successfully.
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.535395916 +0000 UTC m=+0.132376262 container died 0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c48b007bc6c1915e57f34bd4fbe8d61cb9a7919b81ae572d437c8b89014d234-merged.mount: Deactivated successfully.
Oct 10 10:21:22 compute-0 podman[283332]: 2025-10-10 10:21:22.576734837 +0000 UTC m=+0.173715173 container remove 0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:21:22 compute-0 systemd[1]: libpod-conmon-0d2d61805372f4843cce4c2e85f791d4addea29b86635dd5750b6ab8d4fcc22e.scope: Deactivated successfully.
Oct 10 10:21:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:21:22 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399503447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.740 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
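Annotation: nova runs ceph mon dump to learn the monitor addresses it will embed in the guest's libvirt rbd disk definition. A minimal reproduction of that lookup, assuming the mon map's public_addr fields look like '192.168.122.100:6789/0':

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mons = json.loads(out)["mons"]
    # strip the '/nonce' suffix, keep host:port
    print([m["public_addr"].split("/")[0] for m in mons])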
Oct 10 10:21:22 compute-0 podman[283375]: 2025-10-10 10:21:22.762930913 +0000 UTC m=+0.041420136 container create 5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.768 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.774 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:22 compute-0 systemd[1]: Started libpod-conmon-5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7.scope.
Oct 10 10:21:22 compute-0 podman[283375]: 2025-10-10 10:21:22.743407097 +0000 UTC m=+0.021896350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:21:22 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a51ca95f5f39109aa78f4e2da9cef7c273d8a6a10d5063f5d750cd877463d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a51ca95f5f39109aa78f4e2da9cef7c273d8a6a10d5063f5d750cd877463d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a51ca95f5f39109aa78f4e2da9cef7c273d8a6a10d5063f5d750cd877463d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a51ca95f5f39109aa78f4e2da9cef7c273d8a6a10d5063f5d750cd877463d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a51ca95f5f39109aa78f4e2da9cef7c273d8a6a10d5063f5d750cd877463d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:22 compute-0 podman[283375]: 2025-10-10 10:21:22.864477091 +0000 UTC m=+0.142966334 container init 5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_meitner, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 10:21:22 compute-0 podman[283375]: 2025-10-10 10:21:22.873725623 +0000 UTC m=+0.152214846 container start 5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 10:21:22 compute-0 podman[283375]: 2025-10-10 10:21:22.877453601 +0000 UTC m=+0.155942934 container attach 5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:21:22 compute-0 nova_compute[261329]: 2025-10-10 10:21:22.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-0 awesome_meitner[283412]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:21:23 compute-0 awesome_meitner[283412]: --> All data devices are unavailable
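Annotation: the ceph-volume batch dry run saw the single LVM data device (/dev/ceph_vg0/ceph_lv0 from the cephadm command above) and rejected it, most plausibly because that LV already backs the running OSD, leaving cephadm nothing new to deploy. One way to confirm would be ceph-volume's inventory report, run wherever ceph-volume is available (e.g. inside the ceph container):

    import json, subprocess

    out = subprocess.check_output(["ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"],
              "rejected:", dev.get("rejected_reasons"))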
Oct 10 10:21:23 compute-0 systemd[1]: libpod-5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7.scope: Deactivated successfully.
Oct 10 10:21:23 compute-0 podman[283375]: 2025-10-10 10:21:23.209950635 +0000 UTC m=+0.488439858 container died 5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-131a51ca95f5f39109aa78f4e2da9cef7c273d8a6a10d5063f5d750cd877463d-merged.mount: Deactivated successfully.
Oct 10 10:21:23 compute-0 podman[283375]: 2025-10-10 10:21:23.250357717 +0000 UTC m=+0.528846940 container remove 5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_meitner, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:21:23 compute-0 systemd[1]: libpod-conmon-5a4fea14fac9f53294d73a036681f16bb3620eb4b319636aab9d1dae001160d7.scope: Deactivated successfully.
Oct 10 10:21:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:21:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1112813609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.293 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.296 2 DEBUG nova.virt.libvirt.vif [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-277253193',display_name='tempest-TestNetworkBasicOps-server-277253193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-277253193',id=13,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDR0mZDZN9d//oqI4V9JZ/85IDnJ3cnr4f0fUnvUvFDzJBE9lZUpkoN9mxkZbtw19eqdCDiu1u/I/10IPmNJBXSvfa7yQYNp2kp53hQoG9FkIIZc/5ba4JvQPZVEct47dQ==',key_name='tempest-TestNetworkBasicOps-696613839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-528100gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:21:16Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=9feccadf-731e-4960-8772-bd18adf2908d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.297 2 DEBUG nova.network.os_vif_util [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.297 2 DEBUG nova.network.os_vif_util [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
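
The two os_vif_util entries above translate the Neutron-style VIF dict into an os-vif VIFOpenVSwitch object. A rough illustration of that mapping using a stand-in dataclass (field names taken from the logged repr; the class itself is hypothetical, not the real os_vif type):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchSketch:
        # Stand-in for os_vif's VIFOpenVSwitch; fields mirror the logged repr.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        preserve_on_delete: bool
        active: bool

    def nova_to_osvif_sketch(vif: dict) -> VIFOpenVSwitchSketch:
        # Keys below all appear in the VIF dict logged above.
        details = vif.get("details", {})
        return VIFOpenVSwitchSketch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],
            has_traffic_filtering=details.get("port_filter", False),
            preserve_on_delete=vif.get("preserve_on_delete", False),
            active=vif.get("active", False),
        )
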
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.299 2 DEBUG nova.objects.instance [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9feccadf-731e-4960-8772-bd18adf2908d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:21:23 compute-0 sudo[283247]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.316 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <uuid>9feccadf-731e-4960-8772-bd18adf2908d</uuid>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <name>instance-0000000d</name>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <memory>131072</memory>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <vcpu>1</vcpu>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <metadata>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:name>tempest-TestNetworkBasicOps-server-277253193</nova:name>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:creationTime>2025-10-10 10:21:22</nova:creationTime>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:flavor name="m1.nano">
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:memory>128</nova:memory>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:disk>1</nova:disk>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:swap>0</nova:swap>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </nova:flavor>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:owner>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </nova:owner>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <nova:ports>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <nova:port uuid="514cbe9c-a25e-4d45-b1d7-6b207b16f4c5">
Oct 10 10:21:23 compute-0 nova_compute[261329]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         </nova:port>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </nova:ports>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </nova:instance>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </metadata>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <sysinfo type="smbios">
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <system>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <entry name="serial">9feccadf-731e-4960-8772-bd18adf2908d</entry>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <entry name="uuid">9feccadf-731e-4960-8772-bd18adf2908d</entry>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </system>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </sysinfo>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <os>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <boot dev="hd"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <smbios mode="sysinfo"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </os>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <features>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <acpi/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <apic/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <vmcoreinfo/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </features>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <clock offset="utc">
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <timer name="hpet" present="no"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </clock>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <cpu mode="host-model" match="exact">
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </cpu>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   <devices>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <disk type="network" device="disk">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/9feccadf-731e-4960-8772-bd18adf2908d_disk">
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </source>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <target dev="vda" bus="virtio"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <disk type="network" device="cdrom">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <driver type="raw" cache="none"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <source protocol="rbd" name="vms/9feccadf-731e-4960-8772-bd18adf2908d_disk.config">
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </source>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <auth username="openstack">
Oct 10 10:21:23 compute-0 nova_compute[261329]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       </auth>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <target dev="sda" bus="sata"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </disk>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <interface type="ethernet">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <mac address="fa:16:3e:9b:f6:2a"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <mtu size="1442"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <target dev="tap514cbe9c-a2"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </interface>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <serial type="pty">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <log file="/var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/console.log" append="off"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </serial>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <video>
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <model type="virtio"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </video>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <input type="tablet" bus="usb"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <rng model="virtio">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </rng>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <controller type="usb" index="0"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     <memballoon model="virtio">
Oct 10 10:21:23 compute-0 nova_compute[261329]:       <stats period="10"/>
Oct 10 10:21:23 compute-0 nova_compute[261329]:     </memballoon>
Oct 10 10:21:23 compute-0 nova_compute[261329]:   </devices>
Oct 10 10:21:23 compute-0 nova_compute[261329]: </domain>
Oct 10 10:21:23 compute-0 nova_compute[261329]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
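
The block between <domain type="kvm"> and </domain> above is the full guest definition nova hands to libvirt. A small stdlib sketch for pulling the RBD disks and the tap interface back out of it, assuming the XML has been saved to guest.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()  # the domain XML logged above
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        # e.g. rbd vms/9feccadf-..._disk -> vda (virtio)
        print(src.get("protocol"), src.get("name"), "->",
              tgt.get("dev"), f"({tgt.get('bus')})")
    for iface in root.findall("./devices/interface"):
        print("interface", iface.find("mac").get("address"),
              "->", iface.find("target").get("dev"))
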
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.317 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Preparing to wait for external event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.318 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.318 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.318 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.319 2 DEBUG nova.virt.libvirt.vif [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-277253193',display_name='tempest-TestNetworkBasicOps-server-277253193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-277253193',id=13,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDR0mZDZN9d//oqI4V9JZ/85IDnJ3cnr4f0fUnvUvFDzJBE9lZUpkoN9mxkZbtw19eqdCDiu1u/I/10IPmNJBXSvfa7yQYNp2kp53hQoG9FkIIZc/5ba4JvQPZVEct47dQ==',key_name='tempest-TestNetworkBasicOps-696613839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-528100gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:21:16Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=9feccadf-731e-4960-8772-bd18adf2908d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.319 2 DEBUG nova.network.os_vif_util [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.319 2 DEBUG nova.network.os_vif_util [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.320 2 DEBUG os_vif [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514cbe9c-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap514cbe9c-a2, col_values=(('external_ids', {'iface-id': '514cbe9c-a25e-4d45-b1d7-6b207b16f4c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:f6:2a', 'vm-uuid': '9feccadf-731e-4960-8772-bd18adf2908d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-0 NetworkManager[44849]: <info>  [1760091683.3281] manager: (tap514cbe9c-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.334 2 INFO os_vif [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2')
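
The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above travel over the OVSDB socket via ovsdbapp; the shell-level equivalent (an illustration, not what nova actually executes) looks like this:

    import subprocess

    port = "tap514cbe9c-a2"
    iface_id = "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5"
    mac = "fa:16:3e:9b:f6:2a"
    vm_uuid = "9feccadf-731e-4960-8772-bd18adf2908d"

    # AddBridgeCommand(may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int",
                    "--", "set", "Bridge", "br-int", "datapath_type=system"],
                   check=True)
    # AddPortCommand plus the DbSetCommand on the Interface's external_ids
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
                    "--", "set", "Interface", port,
                    f"external_ids:iface-id={iface_id}",
                    "external_ids:iface-status=active",
                    f"external_ids:attached-mac={mac}",
                    f"external_ids:vm-uuid={vm_uuid}"],
                   check=True)

The iface-id in external_ids is what lets ovn-controller match the OVS interface to the Neutron port and claim the lport a moment later.
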
Oct 10 10:21:23 compute-0 sudo[283463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:21:23 compute-0 sudo[283463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:23 compute-0 sudo[283463]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.383 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.384 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.384 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:9b:f6:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.385 2 INFO nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Using config drive
Oct 10 10:21:23 compute-0 ceph-mon[73551]: pgmap v1063: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Oct 10 10:21:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3399503447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:21:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1112813609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.416 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:23 compute-0 sudo[283490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:21:23 compute-0 sudo[283490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
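
The sudo entry above is cephadm shelling into the Ceph container to run ceph-volume. A sketch of the same inventory call using a system-installed cephadm (an assumption; the log invokes a hashed copy under /var/lib/ceph), with the fsid from the log:

    import json
    import subprocess

    FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"  # fsid from the log
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # ceph-volume returns a dict keyed by OSD id, each with its LVM devices.
    for osd_id, devices in json.loads(out).items():
        print("osd", osd_id, [d.get("lv_path") for d in devices])
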
Oct 10 10:21:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:23.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
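
The two beast entries above are anonymous HEAD / probes, typical of load-balancer health checks. The same probe from Python; the gateway's listen port does not appear in this excerpt, so 8080 below is an assumption:

    import urllib.request

    # Port 8080 is an assumption; the access log above omits it.
    req = urllib.request.Request("http://192.168.122.100:8080/", method="HEAD")
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)  # the log shows these probes returning 200
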
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.745 2 INFO nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Creating config drive at /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/disk.config
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.749 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtpey8lb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.798078671 +0000 UTC m=+0.037659308 container create cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:21:23 compute-0 systemd[1]: Started libpod-conmon-cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956.scope.
Oct 10 10:21:23 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.876611095 +0000 UTC m=+0.116191782 container init cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.78252069 +0000 UTC m=+0.022101347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.883088828 +0000 UTC m=+0.122669465 container start cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goodall, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.886401713 +0000 UTC m=+0.125982360 container attach cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goodall, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.885 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtpey8lb" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
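
The mkisofs run above packages the metadata tree from the tempdir into the config-2 ISO. A sketch with the same flags as the logged command (paths swapped for placeholders; assumes mkisofs is installed):

    import subprocess

    src_dir = "/tmp/metadata"      # nova used a tempdir (/tmp/tmpdtpey8lb)
    iso_path = "/tmp/disk.config"  # nova writes under the instance directory
    subprocess.run(
        ["mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", src_dir],
        check=True,
    )
    # The volume label config-2 is what cloud-init scans for at first boot.
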
Oct 10 10:21:23 compute-0 agitated_goodall[283592]: 167 167
Oct 10 10:21:23 compute-0 systemd[1]: libpod-cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956.scope: Deactivated successfully.
Oct 10 10:21:23 compute-0 conmon[283592]: conmon cbf7366d38d363c3e347 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956.scope/container/memory.events
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.88949303 +0000 UTC m=+0.129073697 container died cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:21:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.919 2 DEBUG nova.storage.rbd_utils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 9feccadf-731e-4960-8772-bd18adf2908d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc2ad17b3da48b06e23a94e2197dd86bc2ca1852c519de2dc0c2a0b7d739a41c-merged.mount: Deactivated successfully.
Oct 10 10:21:23 compute-0 nova_compute[261329]: 2025-10-10 10:21:23.925 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/disk.config 9feccadf-731e-4960-8772-bd18adf2908d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:23 compute-0 podman[283573]: 2025-10-10 10:21:23.936619325 +0000 UTC m=+0.176199972 container remove cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:21:23 compute-0 systemd[1]: libpod-conmon-cbf7366d38d363c3e347a36540df7091e8974db98ed2a10ea5aab37509098956.scope: Deactivated successfully.
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.085 2 DEBUG oslo_concurrency.processutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/disk.config 9feccadf-731e-4960-8772-bd18adf2908d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.086 2 INFO nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Deleting local config drive /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d/disk.config because it was imported into RBD.
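
With Ceph backing ephemeral storage, the freshly built ISO is imported into the vms pool and the local copy deleted. The same two steps, sketched to mirror the logged command (the <uuid> placeholders stand in for the instance UUID):

    import os
    import subprocess

    iso = "/var/lib/nova/instances/<uuid>/disk.config"  # placeholder path
    image = "<uuid>_disk.config"                        # placeholder name
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(iso)  # nova deletes the local file once it lives in RBD
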
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.110534963 +0000 UTC m=+0.044881384 container create 237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:21:24 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.117 2 DEBUG nova.network.neutron [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updated VIF entry in instance network info cache for port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.117 2 DEBUG nova.network.neutron [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updating instance_info_cache with network_info: [{"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:21:24 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.138 2 DEBUG oslo_concurrency.lockutils [req-4a3a15b0-f88d-4020-9eb9-e42299ab2303 req-3179a839-a6de-40be-bfa1-3ac9aa74ffd4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:21:24 compute-0 systemd[1]: Started libpod-conmon-237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff.scope.
Oct 10 10:21:24 compute-0 kernel: tap514cbe9c-a2: entered promiscuous mode
Oct 10 10:21:24 compute-0 NetworkManager[44849]: <info>  [1760091684.1714] manager: (tap514cbe9c-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 ovn_controller[153080]: 2025-10-10T10:21:24Z|00067|binding|INFO|Claiming lport 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 for this chassis.
Oct 10 10:21:24 compute-0 ovn_controller[153080]: 2025-10-10T10:21:24Z|00068|binding|INFO|514cbe9c-a25e-4d45-b1d7-6b207b16f4c5: Claiming fa:16:3e:9b:f6:2a 10.100.0.5
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.091493994 +0000 UTC m=+0.025840455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:21:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8302716e65387662be7260528e23e5351e3027778dc4b184d0fcb5b6bf1fdb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8302716e65387662be7260528e23e5351e3027778dc4b184d0fcb5b6bf1fdb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8302716e65387662be7260528e23e5351e3027778dc4b184d0fcb5b6bf1fdb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8302716e65387662be7260528e23e5351e3027778dc4b184d0fcb5b6bf1fdb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.214 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:f6:2a 10.100.0.5'], port_security=['fa:16:3e:9b:f6:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9feccadf-731e-4960-8772-bd18adf2908d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77680597-d73c-4099-b692-9a6f8642f03d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ee66c9d-f553-4e88-8f51-3a05ba711a99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a47590-7e08-4336-bed9-af3f99eb6020, chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.217238755 +0000 UTC m=+0.151585186 container init 237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.215 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 in datapath 77680597-d73c-4099-b692-9a6f8642f03d bound to our chassis
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.216 162925 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77680597-d73c-4099-b692-9a6f8642f03d
Oct 10 10:21:24 compute-0 systemd-machined[215425]: New machine qemu-5-instance-0000000d.
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.225362621 +0000 UTC m=+0.159709052 container start 237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:21:24 compute-0 systemd-udevd[283702]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.228485619 +0000 UTC m=+0.162832060 container attach 237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.231 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa418c7-dd39-4155-be46-b4d5fb34509b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.232 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77680597-d1 in ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
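
provision_datapath above creates a veth pair with one end moved into the ovnmeta- namespace so the metadata proxy can be reached from the tenant network. The equivalent iproute2 sequence, sketched from the interface and namespace names in the log:

    import subprocess

    ns = "ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d"
    outer, inner = "tap77680597-d0", "tap77680597-d1"

    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],
        ["ip", "link", "set", outer, "up"],
        ["ip", "netns", "exec", ns, "ip", "link", "set", inner, "up"],
    ):
        subprocess.run(cmd, check=True)
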
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.234 269344 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77680597-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.234 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[96e6866d-202a-4188-8ffe-139d84fe6aba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.234 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[ef07e85a-8b2d-46e3-b77f-bf8a11d0275b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000d.
Oct 10 10:21:24 compute-0 NetworkManager[44849]: <info>  [1760091684.2470] device (tap514cbe9c-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:21:24 compute-0 NetworkManager[44849]: <info>  [1760091684.2477] device (tap514cbe9c-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.247 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[34625879-4189-45fd-a1e4-0b36a9d50c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.272 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[432d9f2c-4733-4f83-bc28-b2d696a1f3cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_controller[153080]: 2025-10-10T10:21:24Z|00069|binding|INFO|Setting lport 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 ovn-installed in OVS
Oct 10 10:21:24 compute-0 ovn_controller[153080]: 2025-10-10T10:21:24Z|00070|binding|INFO|Setting lport 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 up in Southbound
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.308 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[6b27a28a-6494-4ec7-a60a-fd1a5b251080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.317 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[943592c1-58be-4269-bbcf-92f616d7e107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 NetworkManager[44849]: <info>  [1760091684.3184] manager: (tap77680597-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.355 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[dc584720-7ba9-4427-9087-f56ce57039d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.359 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[1b63e895-9888-4ea8-9711-df33d5a18708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 NetworkManager[44849]: <info>  [1760091684.3840] device (tap77680597-d0): carrier: link connected
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.392 269423 DEBUG oslo.privsep.daemon [-] privsep: reply[2a835636-9c8c-40c9-be2f-be12209399af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.410 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[19481f6e-fe48-4102-808e-5631404d3212]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77680597-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:07:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458795, 'reachable_time': 15174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283736, 'error': None, 'target': 'ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.427 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[69f2b735-72e4-4951-8a56-9abfea715d99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:75c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458795, 'tstamp': 458795}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283738, 'error': None, 'target': 'ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.444 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[097fad10-70b7-4693-9413-5be07e12fbc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77680597-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:07:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458795, 'reachable_time': 15174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283739, 'error': None, 'target': 'ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
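[Editor's aside: the two RTM_NEWLINK dumps above are netlink replies that neutron's ip_lib fetched inside the ovnmeta namespace through oslo.privsep. A minimal pyroute2 sketch of an equivalent query, run directly rather than via privsep; the namespace and interface names are taken from the log lines above, and this is not neutron's actual code path. Requires root.]

from pyroute2 import NetNS

# Open the OVN metadata namespace and look up the veth logged above.
with NetNS("ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d") as ns:
    idx = ns.link_lookup(ifname="tap77680597-d1")
    if idx:
        (link,) = ns.get_links(idx[0])
        print(link.get_attr("IFLA_ADDRESS"),    # e.g. fa:16:3e:bd:07:5c
              link.get_attr("IFLA_OPERSTATE"))  # e.g. UP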
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.473 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[5234cac8-910f-4403-b0c3-61d33e8c932c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]: {
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:     "0": [
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:         {
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "devices": [
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "/dev/loop3"
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             ],
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "lv_name": "ceph_lv0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "lv_size": "21470642176",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "name": "ceph_lv0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "tags": {
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.cluster_name": "ceph",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.crush_device_class": "",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.encrypted": "0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.osd_id": "0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.type": "block",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.vdo": "0",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:                 "ceph.with_tpm": "0"
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             },
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "type": "block",
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:             "vg_name": "ceph_vg0"
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:         }
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]:     ]
Oct 10 10:21:24 compute-0 wizardly_fermi[283690]: }
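[Editor's aside: the JSON above is the stdout of the short-lived ceph container wizardly_fermi; its shape, OSD ids mapping to lists of logical volumes with ceph.* tags, matches a `ceph-volume lvm list --format json` report. A minimal sketch for pulling the backing devices out of such a report; the sample string and helper name are illustrative, not from the log.]

import json

sample = '{"0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0", "type": "block", "tags": {"ceph.osd_id": "0"}}]}'

def block_devices_by_osd(report):
    """Map each OSD id to the physical devices backing its 'block' LV."""
    out = {}
    for osd_id, lvs in json.loads(report).items():
        for lv in lvs:
            if lv.get("type") == "block":
                out.setdefault(osd_id, []).extend(lv.get("devices", []))
    return out

print(block_devices_by_osd(sample))  # {'0': ['/dev/loop3']}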
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.530 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[f83d51cb-fcc8-4979-bb49-a5f0e91b2e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.531 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77680597-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.532 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.532 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77680597-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 NetworkManager[44849]: <info>  [1760091684.5344] manager: (tap77680597-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 10 10:21:24 compute-0 kernel: tap77680597-d0: entered promiscuous mode
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 systemd[1]: libpod-237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff.scope: Deactivated successfully.
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.538 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77680597-d0, col_values=(('external_ids', {'iface-id': '3ce06773-c09d-4a46-8963-067098d3ba08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:24 compute-0 conmon[283690]: conmon 237b871c3b338eae3671 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff.scope/container/memory.events
Oct 10 10:21:24 compute-0 ovn_controller[153080]: 2025-10-10T10:21:24Z|00071|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.539202077 +0000 UTC m=+0.473548518 container died 237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.556 162925 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77680597-d73c-4099-b692-9a6f8642f03d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77680597-d73c-4099-b692-9a6f8642f03d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.558 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[3036ab8d-2b3a-4d30-879f-b13d5db5d948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.560 162925 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: global
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     log         /dev/log local0 debug
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     log-tag     haproxy-metadata-proxy-77680597-d73c-4099-b692-9a6f8642f03d
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     user        root
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     group       root
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     maxconn     1024
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     pidfile     /var/lib/neutron/external/pids/77680597-d73c-4099-b692-9a6f8642f03d.pid.haproxy
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     daemon
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: defaults
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     log global
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     mode http
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     option httplog
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     option dontlognull
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     option http-server-close
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     option forwardfor
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     retries                 3
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     timeout http-request    30s
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     timeout connect         30s
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     timeout client          32s
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     timeout server          32s
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     timeout http-keep-alive 30s
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: listen listener
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     bind 169.254.169.254:80
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:     http-request add-header X-OVN-Network-ID 77680597-d73c-4099-b692-9a6f8642f03d
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:21:24 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:24.561 162925 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d', 'env', 'PROCESS_TAG=haproxy-77680597-d73c-4099-b692-9a6f8642f03d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77680597-d73c-4099-b692-9a6f8642f03d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
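[Editor's aside: the haproxy_cfg dump above is the per-network metadata proxy configuration; the agent writes it to /var/lib/neutron/ovn-metadata-proxy/<network_id>.conf and then spawns haproxy inside the ovnmeta namespace via the rootwrap command on the previous line. A minimal sketch that renders an equivalent config from the one field that varies per network; the template mirrors the logged file with the defaults section omitted, and the function name is illustrative, not neutron's.]

HAPROXY_TEMPLATE = """\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        root
    group       root
    maxconn     1024
    pidfile     /var/lib/neutron/external/pids/{network_id}.pid.haproxy
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata /var/lib/neutron/metadata_proxy
    http-request add-header X-OVN-Network-ID {network_id}
"""

def render_proxy_config(network_id):
    # Only the network UUID varies between the configs this agent logs.
    return HAPROXY_TEMPLATE.format(network_id=network_id)

print(render_proxy_config("77680597-d73c-4099-b692-9a6f8642f03d"))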
Oct 10 10:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8302716e65387662be7260528e23e5351e3027778dc4b184d0fcb5b6bf1fdb5-merged.mount: Deactivated successfully.
Oct 10 10:21:24 compute-0 podman[283651]: 2025-10-10 10:21:24.581555721 +0000 UTC m=+0.515902142 container remove 237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:21:24 compute-0 systemd[1]: libpod-conmon-237b871c3b338eae3671c7f47d25927d6ee7380a8a2e9e35d0e5757797f08cff.scope: Deactivated successfully.
Oct 10 10:21:24 compute-0 sudo[283490]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.635 2 DEBUG nova.compute.manager [req-fb38c0cf-b3bb-4e90-be1f-9ce8bba8568d req-99a0c91a-e60f-4fbc-bbd7-0f47757d2d43 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.635 2 DEBUG oslo_concurrency.lockutils [req-fb38c0cf-b3bb-4e90-be1f-9ce8bba8568d req-99a0c91a-e60f-4fbc-bbd7-0f47757d2d43 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.636 2 DEBUG oslo_concurrency.lockutils [req-fb38c0cf-b3bb-4e90-be1f-9ce8bba8568d req-99a0c91a-e60f-4fbc-bbd7-0f47757d2d43 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.636 2 DEBUG oslo_concurrency.lockutils [req-fb38c0cf-b3bb-4e90-be1f-9ce8bba8568d req-99a0c91a-e60f-4fbc-bbd7-0f47757d2d43 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:24 compute-0 nova_compute[261329]: 2025-10-10 10:21:24.636 2 DEBUG nova.compute.manager [req-fb38c0cf-b3bb-4e90-be1f-9ce8bba8568d req-99a0c91a-e60f-4fbc-bbd7-0f47757d2d43 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Processing event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:21:24 compute-0 sudo[283776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:21:24 compute-0 sudo[283776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:24 compute-0 sudo[283776]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:24 compute-0 sudo[283820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:21:24 compute-0 sudo[283820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:24 compute-0 podman[283876]: 2025-10-10 10:21:24.924642428 +0000 UTC m=+0.051399869 container create 81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 10 10:21:24 compute-0 systemd[1]: Started libpod-conmon-81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1.scope.
Oct 10 10:21:24 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:25 compute-0 podman[283876]: 2025-10-10 10:21:24.89994019 +0000 UTC m=+0.026697651 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0c1abfb717d67c1510152551f91950d746c130e34b26c06ed4c1bf3fa6e341/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:25 compute-0 podman[283876]: 2025-10-10 10:21:25.01422326 +0000 UTC m=+0.140980721 container init 81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 10 10:21:25 compute-0 podman[283876]: 2025-10-10 10:21:25.021032585 +0000 UTC m=+0.147790026 container start 81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 10 10:21:25 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [NOTICE]   (283918) : New worker (283922) forked
Oct 10 10:21:25 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [NOTICE]   (283918) : Loading success.
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.166653362 +0000 UTC m=+0.053945490 container create f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:21:25 compute-0 systemd[1]: Started libpod-conmon-f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75.scope.
Oct 10 10:21:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.139800376 +0000 UTC m=+0.027092604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.246688853 +0000 UTC m=+0.133981081 container init f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.25547541 +0000 UTC m=+0.142767578 container start f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.259688783 +0000 UTC m=+0.146981011 container attach f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:21:25 compute-0 adoring_agnesi[283958]: 167 167
Oct 10 10:21:25 compute-0 systemd[1]: libpod-f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75.scope: Deactivated successfully.
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.261794869 +0000 UTC m=+0.149087007 container died f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_agnesi, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.264 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.265 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091685.2635386, 9feccadf-731e-4960-8772-bd18adf2908d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.266 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] VM Started (Lifecycle Event)
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.272 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.276 2 INFO nova.virt.libvirt.driver [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Instance spawned successfully.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.276 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaea7652b2886e0ed791ddcb929e3777f252c7e920b17f9a288a082dbd61e761-merged.mount: Deactivated successfully.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.294 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.307 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:21:25 compute-0 podman[283942]: 2025-10-10 10:21:25.307411707 +0000 UTC m=+0.194703845 container remove f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_agnesi, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.314 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.315 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.316 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.317 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.318 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.320 2 DEBUG nova.virt.libvirt.driver [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:21:25 compute-0 systemd[1]: libpod-conmon-f8352d3dba4a8c63d6781e5eac0cb2ac59925966cab6f1e597c7191deb303d75.scope: Deactivated successfully.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.345 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.346 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091685.2672887, 9feccadf-731e-4960-8772-bd18adf2908d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.346 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] VM Paused (Lifecycle Event)
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.374 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.377 2 DEBUG nova.virt.driver [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] Emitting event <LifecycleEvent: 1760091685.2725024, 9feccadf-731e-4960-8772-bd18adf2908d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.378 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] VM Resumed (Lifecycle Event)
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.384 2 INFO nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Took 8.66 seconds to spawn the instance on the hypervisor.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.384 2 DEBUG nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.395 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.399 2 DEBUG nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:21:25 compute-0 ceph-mon[73551]: pgmap v1064: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.426 2 INFO nova.compute.manager [None req-fadbac0f-488f-48be-b1c4-4525a5e6c8af - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.456 2 INFO nova.compute.manager [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Took 9.84 seconds to build instance.
Oct 10 10:21:25 compute-0 nova_compute[261329]: 2025-10-10 10:21:25.471 2 DEBUG oslo_concurrency.lockutils [None req-946ed5c3-a2d2-4436-bafb-1933b005fcb4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:25 compute-0 podman[283982]: 2025-10-10 10:21:25.482666057 +0000 UTC m=+0.038264006 container create 59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:21:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:25.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:25 compute-0 systemd[1]: Started libpod-conmon-59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44.scope.
Oct 10 10:21:25 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad9d3372c56f54b6667f4a4e46e9a43893aecd0ddd8071d6afbce6ca951c7b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad9d3372c56f54b6667f4a4e46e9a43893aecd0ddd8071d6afbce6ca951c7b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad9d3372c56f54b6667f4a4e46e9a43893aecd0ddd8071d6afbce6ca951c7b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad9d3372c56f54b6667f4a4e46e9a43893aecd0ddd8071d6afbce6ca951c7b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:21:25 compute-0 podman[283982]: 2025-10-10 10:21:25.564514975 +0000 UTC m=+0.120112944 container init 59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cori, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:21:25 compute-0 podman[283982]: 2025-10-10 10:21:25.467902212 +0000 UTC m=+0.023500181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:21:25 compute-0 podman[283982]: 2025-10-10 10:21:25.572615801 +0000 UTC m=+0.128213750 container start 59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:21:25 compute-0 podman[283982]: 2025-10-10 10:21:25.586826948 +0000 UTC m=+0.142424917 container attach 59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:21:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
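[Editor's aside: the beast lines above are radosgw access-log entries for anonymous HEAD / requests from 192.168.122.x hosts, which look like load-balancer health checks. A small sketch extracting the fields shown from such a line; the regex targets only the format appearing in this log.]

import re

line = ('beast: 0x7f96beba75d0: 192.168.122.102 - anonymous '
        '[10/Oct/2025:10:21:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000032s')

# client - user [timestamp] "method path proto" status size ... latency=Ns
m = re.search(r'(\S+) - (\S+) \[(.*?)\] "(\S+) (\S+) ([^"]+)" (\d+) (\d+).*latency=([\d.]+)s', line)
if m:
    client, user, ts, method, path, proto, status, size, latency = m.groups()
    print(client, method, path, status, float(latency))  # 192.168.122.102 HEAD / 200 0.001000032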
Oct 10 10:21:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:21:26 compute-0 lvm[284074]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:21:26 compute-0 lvm[284074]: VG ceph_vg0 finished
Oct 10 10:21:26 compute-0 optimistic_cori[283998]: {}
Oct 10 10:21:26 compute-0 systemd[1]: libpod-59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44.scope: Deactivated successfully.
Oct 10 10:21:26 compute-0 systemd[1]: libpod-59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44.scope: Consumed 1.058s CPU time.
Oct 10 10:21:26 compute-0 podman[284077]: 2025-10-10 10:21:26.325554679 +0000 UTC m=+0.024682689 container died 59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 10:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cad9d3372c56f54b6667f4a4e46e9a43893aecd0ddd8071d6afbce6ca951c7b8-merged.mount: Deactivated successfully.
Oct 10 10:21:26 compute-0 podman[284077]: 2025-10-10 10:21:26.370442293 +0000 UTC m=+0.069570283 container remove 59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_cori, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:21:26 compute-0 systemd[1]: libpod-conmon-59ea2cd257e0b4505a7bf7463b60632a8f923769095b7a1d5f185bfba3133c44.scope: Deactivated successfully.
Oct 10 10:21:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:21:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213170853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:21:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:21:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213170853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:21:26 compute-0 sudo[283820]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:21:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:21:26 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:26 compute-0 sudo[284092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:21:26 compute-0 sudo[284092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:26 compute-0 sudo[284092]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:26 compute-0 nova_compute[261329]: 2025-10-10 10:21:26.735 2 DEBUG nova.compute.manager [req-4baa2a25-83e5-4938-baac-c20d376282d7 req-31c40969-ba91-4ab3-a242-d9f9ad0f693e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:26 compute-0 nova_compute[261329]: 2025-10-10 10:21:26.737 2 DEBUG oslo_concurrency.lockutils [req-4baa2a25-83e5-4938-baac-c20d376282d7 req-31c40969-ba91-4ab3-a242-d9f9ad0f693e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:26 compute-0 nova_compute[261329]: 2025-10-10 10:21:26.738 2 DEBUG oslo_concurrency.lockutils [req-4baa2a25-83e5-4938-baac-c20d376282d7 req-31c40969-ba91-4ab3-a242-d9f9ad0f693e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:26 compute-0 nova_compute[261329]: 2025-10-10 10:21:26.739 2 DEBUG oslo_concurrency.lockutils [req-4baa2a25-83e5-4938-baac-c20d376282d7 req-31c40969-ba91-4ab3-a242-d9f9ad0f693e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:26 compute-0 nova_compute[261329]: 2025-10-10 10:21:26.739 2 DEBUG nova.compute.manager [req-4baa2a25-83e5-4938-baac-c20d376282d7 req-31c40969-ba91-4ab3-a242-d9f9ad0f693e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] No waiting events found dispatching network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:21:26 compute-0 nova_compute[261329]: 2025-10-10 10:21:26.740 2 WARNING nova.compute.manager [req-4baa2a25-83e5-4938-baac-c20d376282d7 req-31c40969-ba91-4ab3-a242-d9f9ad0f693e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received unexpected event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 for instance with vm_state active and task_state None.
Oct 10 10:21:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:27.209Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:21:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:27.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 10 10:21:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:27] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 10 10:21:27 compute-0 ceph-mon[73551]: pgmap v1065: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:21:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/213170853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:21:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/213170853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:21:27 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:27 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:27.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:27 compute-0 nova_compute[261329]: 2025-10-10 10:21:27.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:27 compute-0 NetworkManager[44849]: <info>  [1760091687.8001] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 10 10:21:27 compute-0 NetworkManager[44849]: <info>  [1760091687.8046] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 10 10:21:27 compute-0 ovn_controller[153080]: 2025-10-10T10:21:27Z|00072|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:27 compute-0 nova_compute[261329]: 2025-10-10 10:21:27.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:27 compute-0 ovn_controller[153080]: 2025-10-10T10:21:27Z|00073|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:27 compute-0 nova_compute[261329]: 2025-10-10 10:21:27.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:28 compute-0 ceph-mon[73551]: pgmap v1066: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.841 2 DEBUG nova.compute.manager [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-changed-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.841 2 DEBUG nova.compute.manager [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Refreshing instance network info cache due to event network-changed-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.841 2 DEBUG oslo_concurrency.lockutils [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.842 2 DEBUG oslo_concurrency.lockutils [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:21:28 compute-0 nova_compute[261329]: 2025-10-10 10:21:28.842 2 DEBUG nova.network.neutron [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Refreshing network info cache for port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:21:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:28.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:29.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 10 10:21:30 compute-0 ceph-mon[73551]: pgmap v1067: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 10 10:21:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:21:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:31.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 10 10:21:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:33 compute-0 ceph-mon[73551]: pgmap v1068: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 10 10:21:33 compute-0 nova_compute[261329]: 2025-10-10 10:21:33.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:33 compute-0 nova_compute[261329]: 2025-10-10 10:21:33.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:33 compute-0 nova_compute[261329]: 2025-10-10 10:21:33.354 2 DEBUG nova.network.neutron [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updated VIF entry in instance network info cache for port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:21:33 compute-0 nova_compute[261329]: 2025-10-10 10:21:33.355 2 DEBUG nova.network.neutron [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updating instance_info_cache with network_info: [{"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:21:33 compute-0 nova_compute[261329]: 2025-10-10 10:21:33.382 2 DEBUG oslo_concurrency.lockutils [req-1542fa36-0479-4201-82c8-abac0b353b05 req-91aef7ec-c882-4833-ade9-546faefe4fd0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:21:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:33.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.033345) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694033420, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2133, "num_deletes": 251, "total_data_size": 4247050, "memory_usage": 4318048, "flush_reason": "Manual Compaction"}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694060075, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4092731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29484, "largest_seqno": 31616, "table_properties": {"data_size": 4083132, "index_size": 6029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20048, "raw_average_key_size": 20, "raw_value_size": 4063987, "raw_average_value_size": 4155, "num_data_blocks": 259, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091492, "oldest_key_time": 1760091492, "file_creation_time": 1760091694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 26826 microseconds, and 8107 cpu microseconds.
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.060169) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4092731 bytes OK
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.060208) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062427) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062476) EVENT_LOG_v1 {"time_micros": 1760091694062464, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062512) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4238384, prev total WAL file size 4238384, number of live WAL files 2.
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.063895) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3996KB)], [65(11MB)]
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694063959, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16141334, "oldest_snapshot_seqno": -1}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6208 keys, 14032894 bytes, temperature: kUnknown
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694150007, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14032894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992311, "index_size": 23961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 158987, "raw_average_key_size": 25, "raw_value_size": 13881453, "raw_average_value_size": 2236, "num_data_blocks": 964, "num_entries": 6208, "num_filter_entries": 6208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.150248) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14032894 bytes
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.151247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.4 rd, 163.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.5 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 6729, records dropped: 521 output_compression: NoCompression
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.151262) EVENT_LOG_v1 {"time_micros": 1760091694151255, "job": 36, "event": "compaction_finished", "compaction_time_micros": 86115, "compaction_time_cpu_micros": 35060, "output_level": 6, "num_output_files": 1, "total_output_size": 14032894, "num_input_records": 6729, "num_output_records": 6208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694152057, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694154561, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.063769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.154627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.154636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.154639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.154642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:21:34.154645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:35 compute-0 ceph-mon[73551]: pgmap v1069: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Oct 10 10:21:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:21:37 compute-0 ovn_controller[153080]: 2025-10-10T10:21:37Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:f6:2a 10.100.0.5
Oct 10 10:21:37 compute-0 ovn_controller[153080]: 2025-10-10T10:21:37Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:f6:2a 10.100.0.5
Oct 10 10:21:37 compute-0 ceph-mon[73551]: pgmap v1070: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:21:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:37.211Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:21:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:37.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:21:37 compute-0 podman[284132]: 2025-10-10 10:21:37.215037526 +0000 UTC m=+0.058845485 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:21:37 compute-0 podman[284131]: 2025-10-10 10:21:37.223857814 +0000 UTC m=+0.068659633 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:21:37 compute-0 podman[284133]: 2025-10-10 10:21:37.243005277 +0000 UTC m=+0.084897125 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 10 10:21:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:37] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 10 10:21:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:37] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 10 10:21:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:37.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 10 10:21:38 compute-0 nova_compute[261329]: 2025-10-10 10:21:38.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:38 compute-0 nova_compute[261329]: 2025-10-10 10:21:38.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:38.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:39 compute-0 ceph-mon[73551]: pgmap v1071: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 10 10:21:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:39.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 10 10:21:40 compute-0 sudo[284196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:21:40 compute-0 sudo[284196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:40 compute-0 sudo[284196]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:41 compute-0 ceph-mon[73551]: pgmap v1072: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 10 10:21:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:41.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:41.909 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:41.910 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:41.910 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 10 10:21:42 compute-0 nova_compute[261329]: 2025-10-10 10:21:42.967 2 INFO nova.compute.manager [None req-b3362502-142e-4071-abd7-77b05a88253e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Get console output
Oct 10 10:21:42 compute-0 nova_compute[261329]: 2025-10-10 10:21:42.974 2054 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:21:43 compute-0 nova_compute[261329]: 2025-10-10 10:21:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:43 compute-0 ceph-mon[73551]: pgmap v1073: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 10 10:21:43 compute-0 nova_compute[261329]: 2025-10-10 10:21:43.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:43.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:43 compute-0 ovn_controller[153080]: 2025-10-10T10:21:43Z|00074|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:43 compute-0 nova_compute[261329]: 2025-10-10 10:21:43.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:43 compute-0 ovn_controller[153080]: 2025-10-10T10:21:43Z|00075|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:43 compute-0 nova_compute[261329]: 2025-10-10 10:21:43.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:21:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:44 compute-0 nova_compute[261329]: 2025-10-10 10:21:44.783 2 INFO nova.compute.manager [None req-7addb2ec-5920-4ac2-b651-108cca7e8b2b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Get console output
Oct 10 10:21:44 compute-0 nova_compute[261329]: 2025-10-10 10:21:44.788 2054 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:21:45 compute-0 ceph-mon[73551]: pgmap v1074: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:21:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:45.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:45 compute-0 nova_compute[261329]: 2025-10-10 10:21:45.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:45 compute-0 NetworkManager[44849]: <info>  [1760091705.5589] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 10 10:21:45 compute-0 NetworkManager[44849]: <info>  [1760091705.5599] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 10 10:21:45 compute-0 ovn_controller[153080]: 2025-10-10T10:21:45Z|00076|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:45 compute-0 ovn_controller[153080]: 2025-10-10T10:21:45Z|00077|binding|INFO|Releasing lport 3ce06773-c09d-4a46-8963-067098d3ba08 from this chassis (sb_readonly=0)
Oct 10 10:21:45 compute-0 nova_compute[261329]: 2025-10-10 10:21:45.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:45 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:45.836 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:21:45 compute-0 nova_compute[261329]: 2025-10-10 10:21:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:45 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:45.837 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:21:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:21:45 compute-0 nova_compute[261329]: 2025-10-10 10:21:45.925 2 INFO nova.compute.manager [None req-189d4327-123f-497b-b6e0-c891b0928146 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Get console output
Oct 10 10:21:45 compute-0 nova_compute[261329]: 2025-10-10 10:21:45.929 2054 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:21:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:21:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:21:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:21:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:21:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:21:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:21:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.682 2 DEBUG nova.compute.manager [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-changed-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.682 2 DEBUG nova.compute.manager [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Refreshing instance network info cache due to event network-changed-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.682 2 DEBUG oslo_concurrency.lockutils [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.682 2 DEBUG oslo_concurrency.lockutils [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.683 2 DEBUG nova.network.neutron [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Refreshing network info cache for port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.738 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.739 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.740 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.741 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.741 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
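
Note: the five DEBUG lines above are oslo.concurrency's lock bookkeeping: one lock named after the instance UUID guards do_terminate_instance, and a second "-events" lock briefly serializes clearing the instance's pending events. A minimal sketch of both forms of the same primitive, with stand-in lock names:

    from oslo_concurrency import lockutils

    # Decorator form: callers serialize on a named in-process lock; this is
    # what emits the "acquired ... waited" / "released ... held" DEBUG lines.
    @lockutils.synchronized("demo-instance-events")
    def clear_events():
        return []

    clear_events()

    # Context-manager form, the equivalent of the Acquiring/Acquired pair:
    with lockutils.lock("demo-instance"):
        pass
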
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.743 2 INFO nova.compute.manager [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Terminating instance
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.745 2 DEBUG nova.compute.manager [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:21:46 compute-0 kernel: tap514cbe9c-a2 (unregistering): left promiscuous mode
Oct 10 10:21:46 compute-0 NetworkManager[44849]: <info>  [1760091706.8076] device (tap514cbe9c-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:21:46 compute-0 ovn_controller[153080]: 2025-10-10T10:21:46Z|00078|binding|INFO|Releasing lport 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 from this chassis (sb_readonly=0)
Oct 10 10:21:46 compute-0 ovn_controller[153080]: 2025-10-10T10:21:46Z|00079|binding|INFO|Setting lport 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 down in Southbound
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:46 compute-0 ovn_controller[153080]: 2025-10-10T10:21:46Z|00080|binding|INFO|Removing iface tap514cbe9c-a2 ovn-installed in OVS
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:46 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:46.824 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:f6:2a 10.100.0.5'], port_security=['fa:16:3e:9b:f6:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9feccadf-731e-4960-8772-bd18adf2908d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77680597-d73c-4099-b692-9a6f8642f03d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ee66c9d-f553-4e88-8f51-3a05ba711a99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a47590-7e08-4336-bed9-af3f99eb6020, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>], logical_port=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcd217618b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:21:46 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:46.825 162925 INFO neutron.agent.ovn.metadata.agent [-] Port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 in datapath 77680597-d73c-4099-b692-9a6f8642f03d unbound from our chassis
Oct 10 10:21:46 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:46.825 162925 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77680597-d73c-4099-b692-9a6f8642f03d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:21:46 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:46.826 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[31906566-d4d5-4f2a-bd26-e6a08a053c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:46 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:46.827 162925 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d namespace which is not needed anymore
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:46 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 10 10:21:46 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Consumed 12.916s CPU time.
Oct 10 10:21:46 compute-0 systemd-machined[215425]: Machine qemu-5-instance-0000000d terminated.
Oct 10 10:21:46 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [NOTICE]   (283918) : haproxy version is 2.8.14-c23fe91
Oct 10 10:21:46 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [NOTICE]   (283918) : path to executable is /usr/sbin/haproxy
Oct 10 10:21:46 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [WARNING]  (283918) : Exiting Master process...
Oct 10 10:21:46 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [ALERT]    (283918) : Current worker (283922) exited with code 143 (Terminated)
Oct 10 10:21:46 compute-0 neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d[283908]: [WARNING]  (283918) : All workers exited. Exiting... (0)
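
Note: haproxy's ALERT about "code 143" is not a crash. Processes killed by a signal conventionally report 128 plus the signal number, and SIGTERM is 15, so 143 means the worker was terminated cleanly as part of the metadata-proxy shutdown. A one-line check:

    import signal
    assert 128 + signal.SIGTERM == 143  # SIGTERM == 15
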
Oct 10 10:21:46 compute-0 systemd[1]: libpod-81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1.scope: Deactivated successfully.
Oct 10 10:21:46 compute-0 podman[284254]: 2025-10-10 10:21:46.960688942 +0000 UTC m=+0.043221232 container died 81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.981 2 INFO nova.virt.libvirt.driver [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Instance destroyed successfully.
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.982 2 DEBUG nova.objects.instance [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 9feccadf-731e-4960-8772-bd18adf2908d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1-userdata-shm.mount: Deactivated successfully.
Oct 10 10:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f0c1abfb717d67c1510152551f91950d746c130e34b26c06ed4c1bf3fa6e341-merged.mount: Deactivated successfully.
Oct 10 10:21:46 compute-0 podman[284254]: 2025-10-10 10:21:46.995171449 +0000 UTC m=+0.077703739 container cleanup 81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.997 2 DEBUG nova.virt.libvirt.vif [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-277253193',display_name='tempest-TestNetworkBasicOps-server-277253193',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-277253193',id=13,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDR0mZDZN9d//oqI4V9JZ/85IDnJ3cnr4f0fUnvUvFDzJBE9lZUpkoN9mxkZbtw19eqdCDiu1u/I/10IPmNJBXSvfa7yQYNp2kp53hQoG9FkIIZc/5ba4JvQPZVEct47dQ==',key_name='tempest-TestNetworkBasicOps-696613839',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:21:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-528100gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:21:25Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=9feccadf-731e-4960-8772-bd18adf2908d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.998 2 DEBUG nova.network.os_vif_util [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:21:46 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.999 2 DEBUG nova.network.os_vif_util [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:46.999 2 DEBUG os_vif [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514cbe9c-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.006 2 INFO os_vif [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:f6:2a,bridge_name='br-int',has_traffic_filtering=True,id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5,network=Network(77680597-d73c-4099-b692-9a6f8642f03d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514cbe9c-a2')
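
Note: the unplug path above ends in a single ovsdbapp transaction, DelPortCommand(port=tap514cbe9c-a2, bridge=br-int, if_exists=True). A minimal sketch of issuing the same idempotent delete directly through ovsdbapp's ovs_idl backend; the socket path is an assumption, and the port/bridge names are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes the delete a no-op if the port is already gone,
    # matching the DelPortCommand arguments in the logged transaction.
    api.del_port("tap514cbe9c-a2", bridge="br-int", if_exists=True).execute(
        check_error=True)
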
Oct 10 10:21:47 compute-0 systemd[1]: libpod-conmon-81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1.scope: Deactivated successfully.
Oct 10 10:21:47 compute-0 podman[284294]: 2025-10-10 10:21:47.066386201 +0000 UTC m=+0.044483262 container remove 81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.072 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d5167b-3571-402a-9d24-c07c8d6673c2]: (4, ('Fri Oct 10 10:21:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d (81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1)\n81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1\nFri Oct 10 10:21:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d (81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1)\n81e2703373bca4407e5fcb6e42d302c196f4c7c24662bd1643fb459a81a8b4c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.074 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[39859867-5b9d-4468-8fa3-d8db7cfbd26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.075 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77680597-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:47 compute-0 kernel: tap77680597-d0: left promiscuous mode
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.096 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[05f7a2ce-df24-4d77-afcb-55b587b189eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.124 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[bba1d7aa-92c0-4a42-bc6e-34239c59359c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.126 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[2aabc91c-6d99-4b78-bebf-91e6684b487c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:47 compute-0 ceph-mon[73551]: pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:21:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.143 269344 DEBUG oslo.privsep.daemon [-] privsep: reply[c5477756-2cbd-42fd-97a0-cb78363e1861]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458787, 'reachable_time': 44026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284328, 'error': None, 'target': 'ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:21:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d77680597\x2dd73c\x2d4099\x2db692\x2d9a6f8642f03d.mount: Deactivated successfully.
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.148 163038 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:21:47 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:47.148 163038 DEBUG oslo.privsep.daemon [-] privsep: reply[726fdd5f-96c0-46e7-95b4-0e05aa9630c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
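
Note: with the last VIF gone, the agent deletes the now-empty ovnmeta- namespace (remove_netns above), and systemd reports the corresponding /run/netns bind mount deactivating. Neutron's privileged ip_lib is built on pyroute2; a minimal sketch of the equivalent call, reusing the namespace name from the log (requires root / CAP_SYS_ADMIN):

    from pyroute2 import netns

    NS = "ovnmeta-77680597-d73c-4099-b692-9a6f8642f03d"
    if NS in netns.listnetns():   # avoid ENOENT if it is already gone
        netns.remove(NS)          # unmounts and unlinks /run/netns/<NS>
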
Oct 10 10:21:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:47.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:47] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:21:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:47] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.415 2 INFO nova.virt.libvirt.driver [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Deleting instance files /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d_del
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.416 2 INFO nova.virt.libvirt.driver [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Deletion of /var/lib/nova/instances/9feccadf-731e-4960-8772-bd18adf2908d_del complete
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.477 2 INFO nova.compute.manager [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.478 2 DEBUG oslo.service.loopingcall [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.478 2 DEBUG nova.compute.manager [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:21:47 compute-0 nova_compute[261329]: 2025-10-10 10:21:47.478 2 DEBUG nova.network.neutron [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:21:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:47.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.042 2 DEBUG nova.network.neutron [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.058 2 DEBUG nova.network.neutron [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updated VIF entry in instance network info cache for port 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.058 2 DEBUG nova.network.neutron [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updating instance_info_cache with network_info: [{"id": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "address": "fa:16:3e:9b:f6:2a", "network": {"id": "77680597-d73c-4099-b692-9a6f8642f03d", "bridge": "br-int", "label": "tempest-network-smoke--1929404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514cbe9c-a2", "ovs_interfaceid": "514cbe9c-a25e-4d45-b1d7-6b207b16f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.060 2 INFO nova.compute.manager [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Took 0.58 seconds to deallocate network for instance.
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.083 2 DEBUG oslo_concurrency.lockutils [req-fccc1903-9380-4458-af33-671f56b60376 req-1261da45-13eb-4cb9-95f9-65200bd08243 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-9feccadf-731e-4960-8772-bd18adf2908d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.102 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.103 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.114 2 DEBUG nova.compute.manager [req-771ddfab-6beb-47d1-98ee-5cb3663e3bd8 req-d7b5a621-8839-4f4c-ac9c-8c901a629f3c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-vif-deleted-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.114 2 INFO nova.compute.manager [req-771ddfab-6beb-47d1-98ee-5cb3663e3bd8 req-d7b5a621-8839-4f4c-ac9c-8c901a629f3c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Neutron deleted interface 514cbe9c-a25e-4d45-b1d7-6b207b16f4c5; detaching it from the instance and deleting it from the info cache
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.114 2 DEBUG nova.network.neutron [req-771ddfab-6beb-47d1-98ee-5cb3663e3bd8 req-d7b5a621-8839-4f4c-ac9c-8c901a629f3c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.148 2 DEBUG nova.compute.manager [req-771ddfab-6beb-47d1-98ee-5cb3663e3bd8 req-d7b5a621-8839-4f4c-ac9c-8c901a629f3c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Detach interface failed, port_id=514cbe9c-a25e-4d45-b1d7-6b207b16f4c5, reason: Instance 9feccadf-731e-4960-8772-bd18adf2908d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.159 2 DEBUG oslo_concurrency.processutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:21:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991923566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.587 2 DEBUG oslo_concurrency.processutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
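
Note: the CMD/returned pair above is oslo.concurrency's processutils shelling out to ceph df to refresh pool usage for the resource tracker. A minimal sketch of the same call; the JSON parsing afterwards is an illustrative addition, assuming the standard ceph df --format=json layout:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
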
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.595 2 DEBUG nova.compute.provider_tree [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.612 2 DEBUG nova.scheduler.client.report [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
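
Note: the inventory dict in the line above is what placement uses to size this host: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Working through the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        cap = (f["total"] - f["reserved"]) * f["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
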
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.637 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.660 2 INFO nova.scheduler.client.report [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 9feccadf-731e-4960-8772-bd18adf2908d
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.719 2 DEBUG oslo_concurrency.lockutils [None req-8d9347bc-5a63-4b8f-99b4-23421e27450e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.751 2 DEBUG nova.compute.manager [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-vif-unplugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.751 2 DEBUG oslo_concurrency.lockutils [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.751 2 DEBUG oslo_concurrency.lockutils [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.752 2 DEBUG oslo_concurrency.lockutils [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.752 2 DEBUG nova.compute.manager [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] No waiting events found dispatching network-vif-unplugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.752 2 WARNING nova.compute.manager [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received unexpected event network-vif-unplugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 for instance with vm_state deleted and task_state None.
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.752 2 DEBUG nova.compute.manager [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.753 2 DEBUG oslo_concurrency.lockutils [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "9feccadf-731e-4960-8772-bd18adf2908d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.753 2 DEBUG oslo_concurrency.lockutils [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.753 2 DEBUG oslo_concurrency.lockutils [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "9feccadf-731e-4960-8772-bd18adf2908d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.753 2 DEBUG nova.compute.manager [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] No waiting events found dispatching network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:21:48 compute-0 nova_compute[261329]: 2025-10-10 10:21:48.753 2 WARNING nova.compute.manager [req-8babd03f-3839-45b2-ac5d-4a99b7540fc7 req-09aeaa90-5d8e-4e39-b37a-07cd5eefd549 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Received unexpected event network-vif-plugged-514cbe9c-a25e-4d45-b1d7-6b207b16f4c5 for instance with vm_state deleted and task_state None.
Oct 10 10:21:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:48.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:49 compute-0 ceph-mon[73551]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 10 10:21:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/991923566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:49.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 51 KiB/s wr, 31 op/s
Oct 10 10:21:51 compute-0 ceph-mon[73551]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 51 KiB/s wr, 31 op/s
Oct 10 10:21:51 compute-0 podman[284356]: 2025-10-10 10:21:51.217118005 +0000 UTC m=+0.065173174 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:21:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:51.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 51 KiB/s wr, 31 op/s
Oct 10 10:21:52 compute-0 nova_compute[261329]: 2025-10-10 10:21:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:52 compute-0 nova_compute[261329]: 2025-10-10 10:21:52.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:52 compute-0 nova_compute[261329]: 2025-10-10 10:21:52.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:52 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:21:52.838 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:53 compute-0 nova_compute[261329]: 2025-10-10 10:21:53.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:53 compute-0 ceph-mon[73551]: pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 51 KiB/s wr, 31 op/s
Oct 10 10:21:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 52 KiB/s wr, 41 op/s
Oct 10 10:21:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:55 compute-0 ceph-mon[73551]: pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 52 KiB/s wr, 41 op/s
Oct 10 10:21:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 10 10:21:56 compute-0 nova_compute[261329]: 2025-10-10 10:21:56.256 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:56 compute-0 nova_compute[261329]: 2025-10-10 10:21:56.256 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:21:56 compute-0 nova_compute[261329]: 2025-10-10 10:21:56.256 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:21:56 compute-0 nova_compute[261329]: 2025-10-10 10:21:56.279 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:21:57 compute-0 nova_compute[261329]: 2025-10-10 10:21:57.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:57 compute-0 ceph-mon[73551]: pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 10 10:21:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:57.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:57] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:21:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:21:57] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 10 10:21:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:57.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 10 10:21:58 compute-0 nova_compute[261329]: 2025-10-10 10:21:58.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:21:58.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:21:59 compute-0 ceph-mon[73551]: pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 10 10:21:59 compute-0 nova_compute[261329]: 2025-10-10 10:21:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:59 compute-0 nova_compute[261329]: 2025-10-10 10:21:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:59 compute-0 nova_compute[261329]: 2025-10-10 10:21:59.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:21:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:59.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:21:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:21:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 596 B/s wr, 11 op/s
Oct 10 10:22:00 compute-0 nova_compute[261329]: 2025-10-10 10:22:00.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4151122925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:00 compute-0 sudo[284385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:22:00 compute-0 sudo[284385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:00 compute-0 sudo[284385]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.232 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:01 compute-0 ceph-mon[73551]: pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 596 B/s wr, 11 op/s
Oct 10 10:22:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2003623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.256 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.256 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.256 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.256 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.257 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:22:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:22:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:01.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:22:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212575521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.697 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:22:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:22:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:01.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.893 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.895 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.895 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.895 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:22:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 597 B/s wr, 11 op/s
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.964 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.964 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.977 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091706.97684, 9feccadf-731e-4960-8772-bd18adf2908d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.978 2 INFO nova.compute.manager [-] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] VM Stopped (Lifecycle Event)
Oct 10 10:22:01 compute-0 nova_compute[261329]: 2025-10-10 10:22:01.980 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing inventories for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.002 2 DEBUG nova.compute.manager [None req-810456a5-b864-492e-adff-d589d8c4f3ff - - - - - -] [instance: 9feccadf-731e-4960-8772-bd18adf2908d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.012 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating ProviderTree inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.013 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.036 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing aggregate associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.062 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing trait associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_CLMUL,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.088 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:22:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2212575521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:22:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094992256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.546 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.555 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.578 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.609 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:22:02 compute-0 nova_compute[261329]: 2025-10-10 10:22:02.609 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:22:03 compute-0 nova_compute[261329]: 2025-10-10 10:22:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:03 compute-0 ceph-mon[73551]: pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 597 B/s wr, 11 op/s
Oct 10 10:22:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2094992256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:03.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:03 compute-0 nova_compute[261329]: 2025-10-10 10:22:03.612 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:03 compute-0 nova_compute[261329]: 2025-10-10 10:22:03.612 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:03 compute-0 nova_compute[261329]: 2025-10-10 10:22:03.612 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:22:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:03.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 597 B/s wr, 11 op/s
Oct 10 10:22:04 compute-0 nova_compute[261329]: 2025-10-10 10:22:04.233 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:05 compute-0 ceph-mon[73551]: pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 597 B/s wr, 11 op/s
Oct 10 10:22:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:07 compute-0 nova_compute[261329]: 2025-10-10 10:22:07.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:07.213Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:22:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:07.214Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:22:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:07.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:07 compute-0 ceph-mon[73551]: pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1956938146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:22:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:22:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:07.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:07.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:08 compute-0 nova_compute[261329]: 2025-10-10 10:22:08.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:08 compute-0 podman[284463]: 2025-10-10 10:22:08.246436093 +0000 UTC m=+0.081071595 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 10 10:22:08 compute-0 podman[284464]: 2025-10-10 10:22:08.259410902 +0000 UTC m=+0.093205517 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 10 10:22:08 compute-0 podman[284465]: 2025-10-10 10:22:08.293105043 +0000 UTC m=+0.116566403 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:22:08 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3246728595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:08.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:09 compute-0 ceph-mon[73551]: pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:09.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:11 compute-0 ceph-mon[73551]: pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:11.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:12 compute-0 nova_compute[261329]: 2025-10-10 10:22:12.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:13 compute-0 nova_compute[261329]: 2025-10-10 10:22:13.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:13 compute-0 ceph-mon[73551]: pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:13.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:13.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:15 compute-0 ceph-mon[73551]: pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:22:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:15.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:22:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:22:16
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', '.nfs', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:22:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:22:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:22:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:22:17 compute-0 nova_compute[261329]: 2025-10-10 10:22:17.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:17.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:22:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:22:17 compute-0 ceph-mon[73551]: pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:17.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:17.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:18 compute-0 nova_compute[261329]: 2025-10-10 10:22:18.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:18.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:19 compute-0 ceph-mon[73551]: pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:19.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:19.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:20 compute-0 ceph-mon[73551]: pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:21 compute-0 sudo[284537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:22:21 compute-0 sudo[284537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:21 compute-0 sudo[284537]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:21.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:21.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:22 compute-0 nova_compute[261329]: 2025-10-10 10:22:22.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:22 compute-0 podman[284563]: 2025-10-10 10:22:22.235363738 +0000 UTC m=+0.069324144 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:22:23 compute-0 ceph-mon[73551]: pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:23 compute-0 nova_compute[261329]: 2025-10-10 10:22:23.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:23 compute-0 ovn_controller[153080]: 2025-10-10T10:22:23Z|00081|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Oct 10 10:22:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:22:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:23.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:22:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000031s ======
Oct 10 10:22:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:23.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct 10 10:22:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
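_set_new_cache_sizes is the monitor's periodic memory autotuning: the cache_size target is split between incremental osdmaps, full osdmaps, and the rocksdb (kv) block cache. The three allocations in the line should account for roughly the whole target, which checks out:

    inc_alloc  = 343_932_928    # incremental osdmap cache
    full_alloc = 348_127_232    # full osdmap cache
    kv_alloc   = 318_767_104    # rocksdb (kv) block cache
    cache_size = 1_020_054_731
    total = inc_alloc + full_alloc + kv_alloc
    print(total, f'{total / cache_size:.1%} of target')  # ~99.1%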
Oct 10 10:22:25 compute-0 ceph-mon[73551]: pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:25.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:25.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:26 compute-0 sudo[284588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:22:26 compute-0 sudo[284588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:26 compute-0 sudo[284588]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:26 compute-0 sudo[284613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:22:26 compute-0 sudo[284613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
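cephadm copies itself to /var/lib/ceph/<fsid>/cephadm.<digest> and invokes subcommands through sudo, as the session pair above shows; gather-facts prints a JSON document of host facts for the orchestrator. A sketch mirroring the logged invocation (run as root; the exact fact keys vary by cephadm version, so none are assumed here):

    import json
    import subprocess

    cephadm = ('/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/'
               'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36')
    out = subprocess.run(['python3', cephadm, '--timeout', '895', 'gather-facts'],
                         check=True, capture_output=True, text=True).stdout
    facts = json.loads(out)
    print(sorted(facts)[:5])   # peek at a few of the reported fact keys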
Oct 10 10:22:27 compute-0 ceph-mon[73551]: pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/4103532067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:22:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/4103532067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:22:27 compute-0 nova_compute[261329]: 2025-10-10 10:22:27.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:27.217Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:22:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:27.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
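The alertmanager lines are logfmt: space-separated key=value pairs with double-quoted values. Both configured ceph-dashboard webhook receivers (compute-1 and compute-2 on :8443) are unreachable here, so the notification is dropped after two attempts. A small parser, assuming no escaped quotes inside values:

    import shlex

    line = ('ts=2025-10-10T10:22:27.217Z caller=notify.go:732 level=warn '
            'component=dispatcher receiver=ceph-dashboard integration=webhook[2] '
            'msg="Notify attempt failed, will retry later" attempts=1')
    # shlex keeps the quoted msg value as one token; split each token on the first '='.
    fields = dict(tok.split('=', 1) for tok in shlex.split(line))
    print(fields['level'], '|', fields['msg'])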
Oct 10 10:22:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:22:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:22:27 compute-0 sudo[284613]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:22:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:22:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:27.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:22:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:22:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
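This burst of handle_command/audit pairs is the cephadm mgr module refreshing cluster state before touching OSDs; each dispatched prefix maps to a plain CLI command (ceph config generate-minimal-conf, ceph auth get client.bootstrap-osd, ceph osd tree destroyed). One of them, shelled out from Python:

    import json
    import subprocess

    # CLI equivalent of the {"prefix": "osd tree", "states": ["destroyed"]} dispatch.
    out = subprocess.run(['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
                         check=True, capture_output=True, text=True).stdout
    tree = json.loads(out)
    print([node['name'] for node in tree.get('nodes', [])])  # destroyed OSDs, if any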
Oct 10 10:22:27 compute-0 sudo[284670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:22:27 compute-0 sudo[284670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:27 compute-0 sudo[284670]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:27 compute-0 sudo[284695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:22:27 compute-0 sudo[284695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:27.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:22:28 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:28 compute-0 nova_compute[261329]: 2025-10-10 10:22:28.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.161751813 +0000 UTC m=+0.038441811 container create 10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:22:28 compute-0 systemd[1]: Started libpod-conmon-10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a.scope.
Oct 10 10:22:28 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.221895538 +0000 UTC m=+0.098585546 container init 10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.227880007 +0000 UTC m=+0.104570005 container start 10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.230762598 +0000 UTC m=+0.107452616 container attach 10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 10:22:28 compute-0 nervous_agnesi[284776]: 167 167
Oct 10 10:22:28 compute-0 systemd[1]: libpod-10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a.scope: Deactivated successfully.
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.233752531 +0000 UTC m=+0.110442539 container died 10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.144612863 +0000 UTC m=+0.021302881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a6e870901e658b90a4b4194a71f5ec1a01c9a68c1c43c4e108b28d7e3052c76-merged.mount: Deactivated successfully.
Oct 10 10:22:28 compute-0 podman[284758]: 2025-10-10 10:22:28.272434301 +0000 UTC m=+0.149124309 container remove 10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_agnesi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:22:28 compute-0 systemd[1]: libpod-conmon-10cf28d2d1857d8d48e238e05d6fee1de9c6a8a36ecd955731c200e17eecdf4a.scope: Deactivated successfully.
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.424868512 +0000 UTC m=+0.041091235 container create e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mestorf, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:22:28 compute-0 systemd[1]: Started libpod-conmon-e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f.scope.
Oct 10 10:22:28 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dc38e477e937599c510fb6b08c666daeb39026aed6de60275e0bbef127a55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dc38e477e937599c510fb6b08c666daeb39026aed6de60275e0bbef127a55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dc38e477e937599c510fb6b08c666daeb39026aed6de60275e0bbef127a55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dc38e477e937599c510fb6b08c666daeb39026aed6de60275e0bbef127a55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dc38e477e937599c510fb6b08c666daeb39026aed6de60275e0bbef127a55c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
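The xfs remount messages fire because these overlay mounts sit on a filesystem without the bigtime feature, so inode timestamps are capped at 0x7fffffff seconds since the epoch — the 32-bit time limit the kernel is warning about:

    from datetime import datetime, timezone

    limit = 0x7fffffff   # max 32-bit signed Unix time, as printed by the kernel
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00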
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.408571339 +0000 UTC m=+0.024794072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.51305747 +0000 UTC m=+0.129280223 container init e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.519885075 +0000 UTC m=+0.136107798 container start e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.522677143 +0000 UTC m=+0.138899886 container attach e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:28.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:28 compute-0 pedantic_mestorf[284814]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:22:28 compute-0 pedantic_mestorf[284814]: --> All data devices are unavailable
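"passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means the batch run rejected /dev/ceph_vg0/ceph_lv0 — most plausibly because that LV is already prepared as an OSD (the lvm list output further down shows osd_id 0 on exactly this LV), so there is nothing new for ceph-volume to create. Per-device verdicts can be inspected with ceph-volume inventory; a sketch, assuming the usual "available"/"rejected_reasons" fields in its JSON report:

    import json
    import subprocess

    out = subprocess.run(['ceph-volume', 'inventory', '--format', 'json'],
                         check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        if not dev.get('available'):
            print(dev.get('path'), '->', dev.get('rejected_reasons'))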
Oct 10 10:22:28 compute-0 systemd[1]: libpod-e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f.scope: Deactivated successfully.
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.883337985 +0000 UTC m=+0.499560708 container died e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2dc38e477e937599c510fb6b08c666daeb39026aed6de60275e0bbef127a55c-merged.mount: Deactivated successfully.
Oct 10 10:22:28 compute-0 podman[284798]: 2025-10-10 10:22:28.925738389 +0000 UTC m=+0.541961152 container remove e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:22:28 compute-0 systemd[1]: libpod-conmon-e300bf7c64f7670becd7f2e0246c5f452b3806c1ce384a9b521b74a5f0f2012f.scope: Deactivated successfully.
Oct 10 10:22:28 compute-0 sudo[284695]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:29 compute-0 sudo[284844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:22:29 compute-0 sudo[284844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:29 compute-0 sudo[284844]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:29 compute-0 ceph-mon[73551]: pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:29 compute-0 sudo[284869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:22:29 compute-0 sudo[284869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.458701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749458739, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 716, "num_deletes": 250, "total_data_size": 969813, "memory_usage": 982712, "flush_reason": "Manual Compaction"}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749467241, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 956064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31617, "largest_seqno": 32332, "table_properties": {"data_size": 952472, "index_size": 1436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7475, "raw_average_key_size": 17, "raw_value_size": 945175, "raw_average_value_size": 2172, "num_data_blocks": 64, "num_entries": 435, "num_filter_entries": 435, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091695, "oldest_key_time": 1760091695, "file_creation_time": 1760091749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 8567 microseconds, and 2997 cpu microseconds.
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.467269) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 956064 bytes OK
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.467283) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.469558) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.469570) EVENT_LOG_v1 {"time_micros": 1760091749469566, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.469584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 966195, prev total WAL file size 966195, number of live WAL files 2.
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.470010) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(933KB)], [68(13MB)]
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749470079, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 14988958, "oldest_snapshot_seqno": -1}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6131 keys, 13814705 bytes, temperature: kUnknown
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749559865, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 13814705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13774479, "index_size": 23796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 159097, "raw_average_key_size": 25, "raw_value_size": 13664677, "raw_average_value_size": 2228, "num_data_blocks": 942, "num_entries": 6131, "num_filter_entries": 6131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560197) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 13814705 bytes
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.562041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.6 rd, 153.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(30.1) write-amplify(14.4) OK, records in: 6643, records dropped: 512 output_compression: NoCompression
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.562056) EVENT_LOG_v1 {"time_micros": 1760091749562049, "job": 38, "event": "compaction_finished", "compaction_time_micros": 89954, "compaction_time_cpu_micros": 40003, "output_level": 6, "num_output_files": 1, "total_output_size": 13814705, "num_input_records": 6643, "num_output_records": 6131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749562600, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749565552, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.469919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.565637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.565641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.565642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.565643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:22:29.565645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
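The compaction summary above is internally consistent: its amplification and throughput figures can be re-derived from the event-log numbers (level-0 input file #70 = 956064 bytes, total input 14988958 bytes, output table #71 = 13814705 bytes, 89954 µs of compaction time):

    l0_in    = 956_064        # freshly flushed level-0 file (#70)
    total_in = 14_988_958     # input_data_size from compaction_started
    out      = 13_814_705     # output table (#71)
    secs     = 89_954 / 1e6   # compaction_time_micros

    print(f'write-amplify      {out / l0_in:.1f}')                 # 14.4
    print(f'read-write-amplify {(total_in + out) / l0_in:.1f}')    # 30.1
    print(f'rd {total_in / secs / 1e6:.1f} MB/s, '
          f'wr {out / secs / 1e6:.1f} MB/s')                       # 166.6 / 153.6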
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.565497284 +0000 UTC m=+0.049921414 container create 9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:22:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:29 compute-0 systemd[1]: Started libpod-conmon-9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc.scope.
Oct 10 10:22:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.545029155 +0000 UTC m=+0.029453325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.655201638 +0000 UTC m=+0.139625798 container init 9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.666064433 +0000 UTC m=+0.150488553 container start 9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.669038977 +0000 UTC m=+0.153463117 container attach 9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:22:29 compute-0 magical_blackburn[284953]: 167 167
Oct 10 10:22:29 compute-0 systemd[1]: libpod-9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc.scope: Deactivated successfully.
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.675759429 +0000 UTC m=+0.160183559 container died 9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-be706422dff6790400cc93165a9394c90c49d2b17d248b0102f8e127d163af02-merged.mount: Deactivated successfully.
Oct 10 10:22:29 compute-0 podman[284936]: 2025-10-10 10:22:29.718430273 +0000 UTC m=+0.202854393 container remove 9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_blackburn, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:22:29 compute-0 systemd[1]: libpod-conmon-9c7fdbb5873ad6f6b62fddbc4af60d8ed614f5bb5cf1577d9ce4ead96e6cc9fc.scope: Deactivated successfully.
Oct 10 10:22:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:29.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:29 compute-0 podman[284978]: 2025-10-10 10:22:29.916953107 +0000 UTC m=+0.062834653 container create d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:22:29 compute-0 systemd[1]: Started libpod-conmon-d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b.scope.
Oct 10 10:22:29 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638ef1625dcedc433658e7a616ce8693d52b14f0c48a5db75ab15c73fced572a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638ef1625dcedc433658e7a616ce8693d52b14f0c48a5db75ab15c73fced572a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638ef1625dcedc433658e7a616ce8693d52b14f0c48a5db75ab15c73fced572a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638ef1625dcedc433658e7a616ce8693d52b14f0c48a5db75ab15c73fced572a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:29 compute-0 podman[284978]: 2025-10-10 10:22:29.988384702 +0000 UTC m=+0.134266278 container init d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:29 compute-0 podman[284978]: 2025-10-10 10:22:29.894724642 +0000 UTC m=+0.040606228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:22:29 compute-0 podman[284978]: 2025-10-10 10:22:29.996776238 +0000 UTC m=+0.142657774 container start d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:22:29 compute-0 podman[284978]: 2025-10-10 10:22:29.999785234 +0000 UTC m=+0.145666790 container attach d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]: {
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:     "0": [
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:         {
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "devices": [
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "/dev/loop3"
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             ],
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "lv_name": "ceph_lv0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "lv_size": "21470642176",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "name": "ceph_lv0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "tags": {
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.cluster_name": "ceph",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.crush_device_class": "",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.encrypted": "0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.osd_id": "0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.type": "block",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.vdo": "0",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:                 "ceph.with_tpm": "0"
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             },
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "type": "block",
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:             "vg_name": "ceph_vg0"
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:         }
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]:     ]
Oct 10 10:22:30 compute-0 vigilant_torvalds[284994]: }
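The lvm list --format json report printed above is keyed by OSD id, and each entry carries the LV path, backing devices, and the ceph.* tags tying the LV to the cluster and OSD fsid. A parsing sketch (lvm_list.json is a hypothetical file holding the JSON block above):

    import json

    with open('lvm_list.json') as f:      # the JSON printed by the container above
        report = json.load(f)
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']}, "
                  f"osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']}")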
Oct 10 10:22:30 compute-0 systemd[1]: libpod-d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b.scope: Deactivated successfully.
Oct 10 10:22:30 compute-0 podman[284978]: 2025-10-10 10:22:30.313599263 +0000 UTC m=+0.459480799 container died d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 10:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-638ef1625dcedc433658e7a616ce8693d52b14f0c48a5db75ab15c73fced572a-merged.mount: Deactivated successfully.
Oct 10 10:22:30 compute-0 podman[284978]: 2025-10-10 10:22:30.365032684 +0000 UTC m=+0.510914230 container remove d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:22:30 compute-0 systemd[1]: libpod-conmon-d25e016ec4af5c1e615f572fa199d08e46d50efd125026ced9d5bac19f05779b.scope: Deactivated successfully.
Oct 10 10:22:30 compute-0 sudo[284869]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:30 compute-0 ceph-mon[73551]: pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:30 compute-0 sudo[285014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:22:30 compute-0 sudo[285014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:30 compute-0 sudo[285014]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:30 compute-0 sudo[285039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:22:30 compute-0 sudo[285039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:30 compute-0 podman[285107]: 2025-10-10 10:22:30.983551304 +0000 UTC m=+0.046764493 container create a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:22:31 compute-0 systemd[1]: Started libpod-conmon-a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42.scope.
Oct 10 10:22:31 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:22:31 compute-0 podman[285107]: 2025-10-10 10:22:31.051409816 +0000 UTC m=+0.114623005 container init a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 10:22:31 compute-0 podman[285107]: 2025-10-10 10:22:31.059664078 +0000 UTC m=+0.122877247 container start a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:22:31 compute-0 podman[285107]: 2025-10-10 10:22:30.965277875 +0000 UTC m=+0.028491094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:22:31 compute-0 podman[285107]: 2025-10-10 10:22:31.062479817 +0000 UTC m=+0.125692986 container attach a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:22:31 compute-0 affectionate_bell[285123]: 167 167
Oct 10 10:22:31 compute-0 systemd[1]: libpod-a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42.scope: Deactivated successfully.
Oct 10 10:22:31 compute-0 podman[285107]: 2025-10-10 10:22:31.06571681 +0000 UTC m=+0.128929999 container died a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ba8a9452959be9fa20efb4f86146732cbb7c9085a105c0c94d86951955231e5-merged.mount: Deactivated successfully.
Oct 10 10:22:31 compute-0 podman[285107]: 2025-10-10 10:22:31.099574943 +0000 UTC m=+0.162788112 container remove a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:22:31 compute-0 systemd[1]: libpod-conmon-a1ae9dfeea8df70d42ff997e76f0259f4b19a6d005b398b9f3cb8d1681401f42.scope: Deactivated successfully.
Oct 10 10:22:31 compute-0 podman[285149]: 2025-10-10 10:22:31.281300785 +0000 UTC m=+0.048700556 container create 06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:22:31 compute-0 systemd[1]: Started libpod-conmon-06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324.scope.
Oct 10 10:22:31 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:22:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:22:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:31 compute-0 podman[285149]: 2025-10-10 10:22:31.25622747 +0000 UTC m=+0.023627281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb2c929189e44f900b6295170ad3f91e034dd3664e431810edc13bb49a2e442/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb2c929189e44f900b6295170ad3f91e034dd3664e431810edc13bb49a2e442/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb2c929189e44f900b6295170ad3f91e034dd3664e431810edc13bb49a2e442/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb2c929189e44f900b6295170ad3f91e034dd3664e431810edc13bb49a2e442/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:22:31 compute-0 podman[285149]: 2025-10-10 10:22:31.365387121 +0000 UTC m=+0.132786932 container init 06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:22:31 compute-0 podman[285149]: 2025-10-10 10:22:31.376981209 +0000 UTC m=+0.144380990 container start 06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_beaver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:22:31 compute-0 podman[285149]: 2025-10-10 10:22:31.380572182 +0000 UTC m=+0.147971963 container attach 06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_beaver, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:22:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:31.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:32 compute-0 lvm[285239]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:22:32 compute-0 lvm[285239]: VG ceph_vg0 finished
Oct 10 10:22:32 compute-0 nova_compute[261329]: 2025-10-10 10:22:32.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:32 compute-0 jovial_beaver[285165]: {}
Oct 10 10:22:32 compute-0 systemd[1]: libpod-06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324.scope: Deactivated successfully.
Oct 10 10:22:32 compute-0 systemd[1]: libpod-06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324.scope: Consumed 1.194s CPU time.
Oct 10 10:22:32 compute-0 podman[285149]: 2025-10-10 10:22:32.112425966 +0000 UTC m=+0.879825747 container died 06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb2c929189e44f900b6295170ad3f91e034dd3664e431810edc13bb49a2e442-merged.mount: Deactivated successfully.
Oct 10 10:22:32 compute-0 podman[285149]: 2025-10-10 10:22:32.158396414 +0000 UTC m=+0.925796185 container remove 06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:32 compute-0 systemd[1]: libpod-conmon-06d5f548cc85739d180d16fd3f2e7c31659185e8ce9e5b337b9c67dd3392e324.scope: Deactivated successfully.
Oct 10 10:22:32 compute-0 sudo[285039]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:22:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:32 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:22:32 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:32 compute-0 sudo[285254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:22:32 compute-0 sudo[285254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:32 compute-0 sudo[285254]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:33 compute-0 nova_compute[261329]: 2025-10-10 10:22:33.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:33 compute-0 ceph-mon[73551]: pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:33.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:33.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:35 compute-0 ceph-mon[73551]: pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:35.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:37 compute-0 nova_compute[261329]: 2025-10-10 10:22:37.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:37.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:37 compute-0 ceph-mon[73551]: pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:22:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:22:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:37.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:38 compute-0 nova_compute[261329]: 2025-10-10 10:22:38.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:39 compute-0 podman[285287]: 2025-10-10 10:22:39.224248482 +0000 UTC m=+0.070281999 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 10 10:22:39 compute-0 podman[285288]: 2025-10-10 10:22:39.225182901 +0000 UTC m=+0.069346089 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:39 compute-0 podman[285289]: 2025-10-10 10:22:39.257566539 +0000 UTC m=+0.094941382 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:22:39 compute-0 ceph-mon[73551]: pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:22:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:39.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:22:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:39.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:41 compute-0 sudo[285353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:22:41 compute-0 sudo[285353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:41 compute-0 sudo[285353]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:41 compute-0 ceph-mon[73551]: pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:41.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:41.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:22:41.910 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:22:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:22:41.910 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:22:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:22:41.910 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:22:42 compute-0 nova_compute[261329]: 2025-10-10 10:22:42.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:43 compute-0 nova_compute[261329]: 2025-10-10 10:22:43.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:43 compute-0 ceph-mon[73551]: pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:43 compute-0 sshd-session[285380]: Accepted publickey for zuul from 192.168.122.10 port 41266 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:22:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:43.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:43 compute-0 systemd-logind[806]: New session 58 of user zuul.
Oct 10 10:22:43 compute-0 systemd[1]: Started Session 58 of User zuul.
Oct 10 10:22:43 compute-0 sshd-session[285380]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:22:43 compute-0 sudo[285384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 10 10:22:43 compute-0 sudo[285384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:22:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:45 compute-0 ceph-mon[73551]: pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:45.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26149 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:22:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25805 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16650 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26158 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:47 compute-0 nova_compute[261329]: 2025-10-10 10:22:47.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:47.220Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:22:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:47.221Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:22:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:47.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:47 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25814 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:47 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16662 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:47 compute-0 ceph-mon[73551]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:47 compute-0 ceph-mon[73551]: from='client.26149 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3034678619' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:47] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 10 10:22:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:47] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 10 10:22:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:47.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 10:22:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224328138' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:48 compute-0 nova_compute[261329]: 2025-10-10 10:22:48.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.25805 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.16650 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.26158 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.25814 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.16662 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1845402693' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1224328138' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:48.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:49 compute-0 ceph-mon[73551]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:49.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:49.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:50 compute-0 ceph-mon[73551]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:51.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:22:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:51.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:22:52 compute-0 nova_compute[261329]: 2025-10-10 10:22:52.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:52 compute-0 podman[285732]: 2025-10-10 10:22:52.438087548 +0000 UTC m=+0.051377500 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:22:52 compute-0 ceph-mon[73551]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:53 compute-0 nova_compute[261329]: 2025-10-10 10:22:53.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:53 compute-0 ovs-vsctl[285778]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 10 10:22:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:53.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:54 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 10 10:22:54 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 10 10:22:54 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 10:22:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:54 compute-0 ceph-mon[73551]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:54 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: cache status {prefix=cache status} (starting...)
Oct 10 10:22:54 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:54 compute-0 lvm[286093]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:22:54 compute-0 lvm[286093]: VG ceph_vg0 finished
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: client ls {prefix=client ls} (starting...)
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26173 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 kernel: block vda: the capability attribute has been deprecated.
Oct 10 10:22:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:22:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25829 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26185 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:55.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16677 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mon[73551]: from='client.26173 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1672425821' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: damage ls {prefix=damage ls} (starting...)
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:55.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:22:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736330349' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump loads {prefix=dump loads} (starting...)
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:22:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25847 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26209 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 10 10:22:55 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16692 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:22:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661691761' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25862 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16713 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 10 10:22:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469971866' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25874 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.25829 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.26185 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.16677 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1860179327' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1736330349' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/588890842' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.25847 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.26209 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.16692 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4222220785' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/661691761' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2495586288' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.25862 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.26227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3187780686' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2037339908' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1469971866' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 10 10:22:56 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16731 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26257 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: ops {prefix=ops} (starting...)
Oct 10 10:22:57 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:57 compute-0 nova_compute[261329]: 2025-10-10 10:22:57.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 10 10:22:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610000102' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:57.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:57 compute-0 nova_compute[261329]: 2025-10-10 10:22:57.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:57 compute-0 nova_compute[261329]: 2025-10-10 10:22:57.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:22:57 compute-0 nova_compute[261329]: 2025-10-10 10:22:57.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:22:57 compute-0 nova_compute[261329]: 2025-10-10 10:22:57.254 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26281 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 10 10:22:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242818798' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:57] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:22:57] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16764 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:57 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: session ls {prefix=session ls} (starting...)
Oct 10 10:22:57 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.16713 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.25874 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2388601897' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.16731 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.26257 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2477553374' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/610000102' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2039649700' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1390305313' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.26281 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4242818798' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2988512648' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/778203399' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:22:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:22:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:57.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:22:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:22:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036159140' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: status {prefix=status} (starting...)
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26314 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-0 nova_compute[261329]: 2025-10-10 10:22:58.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:22:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:22:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328721408' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26362 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T10:22:58.609+0000 7f4fc5754640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:22:58 compute-0 ceph-mgr[73845]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:22:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:22:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760174204' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.16764 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.25895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2332321882' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2036159140' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.26314 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.25916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3421568049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3063974506' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1401553685' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/512450568' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1035999956' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1328721408' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1488282981' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/832198139' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3760174204' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:22:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:22:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:22:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647674057' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.25967 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T10:22:59.214+0000 7f4fc5754640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:22:59 compute-0 ceph-mgr[73845]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:22:59 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16857 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T10:22:59.342+0000 7f4fc5754640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:22:59 compute-0 ceph-mgr[73845]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:22:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.26362 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3421045328' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1758553721' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3233252603' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4260196095' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/699035021' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1647674057' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.25967 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2142363579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.16857 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3055579754' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4087547626' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1467050544' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2700707402' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3635718541' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 10 10:22:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19710570' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:22:59 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26431 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:22:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:23:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224783318' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26012 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16902 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 nova_compute[261329]: 2025-10-10 10:23:00.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:00 compute-0 nova_compute[261329]: 2025-10-10 10:23:00.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 10 10:23:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228736852' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16911 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26030 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26473 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/571390437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/19710570' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.26431 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1751714302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2444127767' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3224783318' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.26012 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/175764086' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.16902 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3228736852' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/670445082' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2561366827' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16926 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26042 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16932 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 sudo[287124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:23:01 compute-0 sudo[287124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:01 compute-0 sudo[287124]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16944 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 nova_compute[261329]: 2025-10-10 10:23:01.232 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:01 compute-0 nova_compute[261329]: 2025-10-10 10:23:01.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:01 compute-0 nova_compute[261329]: 2025-10-10 10:23:01.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:23:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296408099' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:07.523748+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:08.523892+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:09.524015+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937745 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 335872 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:10.524194+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:11.524409+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 319488 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:12.524568+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:13.524747+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:14.524883+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937745 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:15.524995+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:16.525112+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:17.525372+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 294912 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 66.429580688s of 66.439460754s, submitted: 4
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:18.525524+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:19.525696+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937525 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 294912 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:20.525859+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 286720 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:21.526066+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 253952 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:22.526239+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:23.526431+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:24.526579+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937541 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:25.526701+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:26.526826+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:27.527018+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:28.527202+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:29.527376+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936782 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:30.527525+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:31.527643+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:32.527787+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 131072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.058155060s of 15.093570709s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:33.527929+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:34.528071+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936802 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:35.528250+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a904a4000 session 0x562a900f9a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce400 session 0x562a907165a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:36.528400+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:37.528590+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:38.528723+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:39.528865+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936802 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:40.529181+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:41.529389+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 65536 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:42.529638+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:43.529782+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:44.529925+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936802 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:45.530073+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:46.530215+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.938276291s of 13.941213608s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 40960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:47.530429+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 40960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:48.530720+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 32768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:49.531060+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936934 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:50.531246+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:51.531528+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:52.531664+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1048576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:53.531853+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:54.532018+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936950 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:55.532154+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:56.532458+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:57.532669+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:58.532839+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:59.533018+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936950 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:00.533179+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:01.533402+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.524230003s of 14.553894997s, submitted: 9
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:02.533610+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:03.533798+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:04.534132+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936650 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:05.535457+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e768c00 session 0x562a90fdc780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:06.535608+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:07.535789+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:08.535931+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1163264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:09.536112+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936802 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1163264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:10.536252+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1163264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:11.536400+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1163264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:12.536571+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1163264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:13.536695+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1163264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:14.536826+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936802 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:15.537008+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:16.537249+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.312570572s of 15.316103935s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1146880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:17.537518+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1146880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:18.537647+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:19.537798+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938462 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:20.537918+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:21.538004+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1105920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:22.538159+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:23.538356+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1081344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:24.538496+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938462 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1081344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:25.538674+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1081344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:26.538816+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:27.539068+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:28.539211+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1146880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:29.539410+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938462 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1146880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:30.539548+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1146880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:31.539680+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:32.539822+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.427209854s of 15.462188721s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:33.539991+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:34.540136+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938162 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:35.540285+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:36.540417+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1105920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:37.540650+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1105920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:38.540810+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1105920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:39.540992+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:40.541210+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:41.541410+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:42.541570+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:43.541748+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:44.541932+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1081344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:45.542173+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1081344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:46.542418+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [1])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1073152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:47.542607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1073152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:48.542783+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1064960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:49.543038+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1064960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:50.543244+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:51.543450+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1064960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:52.543598+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1048576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:53.543725+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1048576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:54.543889+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:55.544298+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:56.544531+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:57.544759+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:58.558365+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:59.558508+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:00.558704+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:01.558906+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:02.559057+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:03.559216+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:04.559411+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:05.559585+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:06.559813+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:07.560033+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:08.560191+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:09.560403+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:10.560546+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:11.560684+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:12.560825+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:13.560980+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:14.561182+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:15.561350+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:16.561487+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:17.561695+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:18.561869+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:19.562059+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:20.562215+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:21.562385+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:22.562581+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:23.562748+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:24.562904+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:25.563095+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:26.563261+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:27.563434+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:28.563555+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:29.563707+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:30.563892+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:31.564000+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e8d3000 session 0x562a9159e1e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:32.564180+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:33.564369+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 892928 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:34.564520+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 892928 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:35.564753+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:36.564906+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:37.565090+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:38.565221+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:39.565363+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:40.565483+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938314 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:41.565634+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:42.565749+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 70.241386414s of 70.244804382s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:43.565944+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:44.566067+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:45.566249+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938446 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:46.566379+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:47.566551+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:48.566746+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:49.566901+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:50.567057+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939974 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 819200 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:51.567220+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 819200 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:52.567415+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 794624 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:53.567753+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 794624 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:54.567915+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 786432 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:55.568099+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939367 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 786432 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:56.568262+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 778240 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.256464005s of 14.291040421s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:57.568471+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78897152 unmapped: 753664 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:58.568607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78897152 unmapped: 753664 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:59.568740+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 745472 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:00.568881+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 745472 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:01.568996+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 737280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:02.569141+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 737280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:03.569277+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 737280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:04.569432+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 729088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:05.569578+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 729088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:06.569778+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 720896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:07.569982+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 720896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:08.570113+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 720896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:09.570276+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 712704 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:10.570434+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 712704 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:11.570607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 712704 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:12.570781+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 696320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:13.570955+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 696320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:14.571108+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78962688 unmapped: 688128 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:15.571276+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78962688 unmapped: 688128 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:16.571443+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 679936 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:17.571645+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 679936 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:18.571773+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 679936 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:19.571915+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 671744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:20.572028+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 671744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:21.572179+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 671744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:22.572300+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 663552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:23.572438+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 663552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:24.572606+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 655360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:25.572774+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 655360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:26.572942+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 655360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:27.573130+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 647168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:28.573292+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 647168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:29.573547+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 638976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:30.573831+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 638976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:31.574014+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 638976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:32.574190+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:33.574371+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:34.574545+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:35.574751+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:36.574895+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:37.575070+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 614400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:38.575223+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 614400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:39.575433+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:40.575612+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:41.575815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a914d12c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:42.576013+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:43.576767+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:44.576977+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 589824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:45.577121+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 589824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:46.577413+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 589824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:47.577605+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 581632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:48.577782+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 581632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:49.577957+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 573440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:50.578104+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939235 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 573440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:51.578253+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 573440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.279891968s of 55.283123016s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907ca000 session 0x562a9152f860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:52.578483+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 565248 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:53.578673+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 565248 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:54.578841+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 565248 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:55.579008+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939383 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 557056 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:56.579439+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 557056 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:57.580717+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 548864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:58.580871+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 548864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:59.581047+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 540672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:00.581196+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940895 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 540672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:01.581374+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 540672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:02.581573+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 532480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cb400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.082942009s of 11.101752281s, submitted: 5
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:03.581713+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:04.581859+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:05.582175+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941027 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 516096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:06.582371+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:07.582627+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 499712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:08.582846+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 491520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [1])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:09.582999+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 491520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:10.583158+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940304 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 483328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:11.583372+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 483328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:12.583520+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 466944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:13.583672+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 458752 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:14.583810+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79192064 unmapped: 458752 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.033471107s of 12.077498436s, submitted: 14
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:15.583947+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939697 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 450560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:16.584078+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 450560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:17.584276+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 434176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:18.584434+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 425984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:19.584638+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 425984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:20.584798+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 417792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:21.584948+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 417792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:22.585075+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 417792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7052 writes, 1226 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7052 writes, 29K keys, 7052 commit groups, 1.0 writes per commit group, ingest: 20.45 MB, 0.03 MB/s
                                           Interval WAL: 7052 writes, 1226 syncs, 5.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:23.585213+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 344064 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:24.585423+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 344064 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:25.585588+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 335872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:26.585715+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 335872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:27.585902+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 335872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:28.586079+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 327680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:29.586246+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 327680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:30.586509+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 319488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:31.586721+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 303104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:32.586896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 303104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:33.587039+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:34.587189+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:35.587384+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:36.587616+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:37.587846+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:38.587993+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:39.588141+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:40.588370+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:41.588548+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:42.588709+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:43.588848+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:44.589010+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:45.589195+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:46.589332+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:47.589505+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:48.589659+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:49.589782+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:50.589913+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:51.590247+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:52.590456+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:53.590607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:54.590755+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:55.590930+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:56.591206+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:57.591431+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:58.591613+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:59.591788+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:00.591976+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 180224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:01.592172+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:02.592774+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:03.593057+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 163840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:04.593243+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 163840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:05.593908+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 163840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:06.595074+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 155648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:07.595718+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:08.595892+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:09.596056+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:10.596214+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 139264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:11.596389+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 114688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:12.597019+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:13.597405+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:14.597619+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:15.597967+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:16.598175+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:17.598398+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:18.598592+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 90112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:19.598771+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 90112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:20.598924+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 81920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:21.599051+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 81920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:22.599270+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:23.599443+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:24.599694+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:25.599896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cb400 session 0x562a8ef3ad20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:26.600100+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:27.600307+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:28.600493+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:29.600699+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:30.600877+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:31.601053+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:32.601226+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:33.601397+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:34.602052+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:35.602246+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 32768 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:36.602441+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90539c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 81.939918518s of 81.950767517s, submitted: 3
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 32768 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:37.602642+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 24576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:38.602852+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 24576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:39.603007+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 8192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:40.603179+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939713 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 0 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:41.603352+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:42.603560+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:43.603702+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:44.603837+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:45.603990+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939713 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:46.604158+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:47.604427+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:48.604768+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:49.604913+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:50.605141+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939713 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:51.605453+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:52.605648+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.926178932s of 15.960593224s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:53.605848+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:54.606047+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:55.606180+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939413 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:56.606311+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:57.606470+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:58.606594+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:59.606698+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:00.606815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:01.606940+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:02.607023+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:03.607175+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:04.607383+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:05.607530+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:06.607660+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 933888 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:07.607820+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 933888 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:08.607935+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:09.608089+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:10.608277+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:11.608451+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:12.608614+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 909312 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:13.608803+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:14.608955+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a904a4000 session 0x562a907f7860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:15.609058+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:16.609499+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:17.609682+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:18.609860+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 876544 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:19.610040+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 868352 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:20.610191+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 868352 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939565 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:21.610442+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 868352 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:22.610627+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:23.610777+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:24.610934+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 860160 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.070625305s of 32.074508667s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:25.611098+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939697 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:26.611378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:27.611567+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:28.611716+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:29.612447+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:30.612618+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941225 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:31.612894+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:32.613047+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:33.613186+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:34.613304+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 1589248 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.718933105s of 10.008526802s, submitted: 200
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:35.613472+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1499136 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941209 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:36.613624+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1499136 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:37.613778+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 1482752 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:38.613960+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 1482752 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:39.614097+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 1482752 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:40.614269+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 1482752 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941209 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:41.614454+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 1482752 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:42.614620+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:43.614801+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e8d3000 session 0x562a9034cf00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:44.614956+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:45.615109+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:46.615252+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941077 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:47.615407+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:48.615544+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:49.615733+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:50.615899+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:51.616058+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941077 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1449984 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:52.616232+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 1433600 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:53.616560+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 1433600 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.012695312s of 19.066093445s, submitted: 18
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:54.616669+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1409024 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:55.616859+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1409024 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:56.617009+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941209 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1409024 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:57.617304+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1409024 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:58.618383+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1409024 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:59.618574+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1409024 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:00.618784+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 1376256 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:01.618947+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942737 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 1376256 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:02.619095+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1359872 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:03.619278+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1343488 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:04.619463+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1343488 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:05.619642+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1343488 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:06.619811+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942130 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1327104 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:07.620066+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1327104 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:08.620451+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1327104 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:09.620683+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1327104 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.879830360s of 15.919183731s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:10.621027+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 1294336 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:11.621189+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 1294336 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:12.621474+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:13.621741+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:14.622215+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:15.622774+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:16.623312+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:17.623701+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:18.624068+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:19.624307+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:20.624732+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:21.624919+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:22.625053+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:23.625192+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:24.625349+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:25.625616+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:26.625903+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:27.626222+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:28.626473+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:29.626784+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:30.627012+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:31.627223+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1277952 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:32.627460+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:33.627694+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:34.627869+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:35.628118+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:36.628445+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:37.628768+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:38.628898+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:39.629016+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:40.629139+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:41.629307+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:42.629499+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:43.629719+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:44.629857+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:45.630043+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1261568 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:46.630195+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1253376 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:47.630387+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1253376 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:48.630516+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1253376 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:49.630651+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1253376 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:50.630808+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1253376 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:51.631002+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1253376 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:52.631204+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1236992 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:53.631395+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1228800 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:54.631563+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1228800 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:55.631710+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1228800 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:56.631916+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:57.632105+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:58.632241+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:59.632381+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:00.632507+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:01.632657+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:02.632815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:03.632960+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:04.633143+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:05.633345+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:06.633487+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:07.633655+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:08.633874+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:09.634054+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:10.634210+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:11.634354+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:12.634550+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:13.634679+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:14.634851+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:15.635081+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:16.635378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:17.635590+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:18.635815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:19.636047+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:20.636283+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:21.636460+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:22.636775+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:23.637014+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:24.637265+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:25.637444+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [3])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:26.637592+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:27.637766+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:28.637944+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:29.638093+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:30.638236+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:31.638418+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:32.638595+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:33.638778+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:34.638979+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:35.639147+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:36.639283+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:37.639501+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:38.639669+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:39.639856+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:40.640017+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:41.640181+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:42.640411+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:43.640570+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:44.640728+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:45.640912+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:46.641124+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:47.641423+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:48.641715+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:49.641965+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:50.642142+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:51.642375+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:52.642529+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:53.642687+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:54.642834+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:55.642979+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:56.643136+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:57.643334+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:58.643486+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:59.643600+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:00.643749+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:01.643914+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:02.644080+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:03.644230+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:04.644479+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:05.644661+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:06.644825+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:07.645025+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:08.645179+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:09.645367+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:10.645521+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:11.645661+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:12.645862+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:13.646063+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:14.646215+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:15.646435+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:16.646697+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:17.646906+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:18.647041+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:19.647251+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:20.647389+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:21.647637+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:22.647793+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:23.648006+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:24.648170+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:25.648345+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:26.648592+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:27.648769+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:28.648914+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:29.649052+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:30.649212+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:31.649450+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:32.649741+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:33.649874+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:34.650064+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:35.650223+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:36.650387+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:37.650545+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:38.650696+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:39.650852+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:40.651012+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:41.651162+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:42.651411+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:43.651606+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:44.651798+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:45.652024+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a91942000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:46.652172+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:47.652334+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:48.652498+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:49.652783+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:50.653084+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:51.653250+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:52.653385+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:53.653607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:54.653757+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:55.653895+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 166.029067993s of 166.031768799s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:56.654078+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90539c00 session 0x562a91045860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942130 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:57.654247+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:58.654451+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:59.654605+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:00.654784+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:01.654922+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943658 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:02.655076+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:03.655268+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:04.655437+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:05.655674+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:06.655851+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943658 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:07.656084+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.241256714s of 11.261567116s, submitted: 5
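
_kv_sync_thread reports how long BlueStore's RocksDB sync thread sat idle over its last accounting window and how many transactions it submitted; here, and in the later utilization lines (16 submissions over 14.68 s, 11 over 12.50 s, and so on), idle time stays above 99%, i.e. the store is doing background housekeeping only. The busy-time arithmetic, where treating each submitted item as one commit batch is an interpretation rather than something the log states:

    idle, window, submitted = 11.241256714, 11.261567116, 5
    busy = window - idle
    print(f"busy {busy * 1e3:.1f} ms of {window:.2f} s ({busy / window:.2%}); "
          f"~{busy / submitted * 1e3:.1f} ms per submission")
    # busy 20.3 ms of 11.26 s (0.18%); ~4.1 ms per submission
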
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:08.656384+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 1048576 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:09.656574+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1040384 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:10.656769+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1040384 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:11.656937+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 999424 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943790 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:12.657147+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:13.657316+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:14.657505+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:15.657686+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:16.657846+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943820 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:17.658097+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:18.658378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:19.658648+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:20.658836+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:21.658971+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.572020531s of 14.684833527s, submitted: 16
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:22.659186+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:23.659397+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:24.659559+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:25.660097+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:26.660273+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:27.660466+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907ca000 session 0x562a919950e0
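
handle_auth_request and ms_handle_reset appear to bracket short-lived inbound connections: a peer connects and is handed a cephx challenge on some connection pointer, and when the connection later drops ms_handle_reset reports the same pointer. In this extract the challenge added on 0x562a907ca000 above is reset here, and the same pattern repeats for 0x562a8e8d3000 further down; pointers get reused for new connections, so any matching has to respect log order. A grep-style pairing sketch (the log path is hypothetical):

    import re
    events = []   # (connection pointer, event kind) in log order
    with open("ceph-osd-compute-0.log") as fh:   # hypothetical extract path
        for raw in fh:
            if m := re.search(r"added challenge on (0x[0-9a-f]+)", raw):
                events.append((m.group(1), "challenge"))
            if m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", raw):
                events.append((m.group(1), "reset"))
    pending = set()
    for ptr, kind in events:
        if kind == "challenge":
            pending.add(ptr)
        elif ptr in pending:   # resets without a prior challenge are skipped
            pending.discard(ptr)
            print(f"{ptr}: challenged, later reset")
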
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:28.660644+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:29.660801+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:30.660959+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:31.661190+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:32.661351+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:33.661487+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:34.661665+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce400 session 0x562a900ac960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:35.661975+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:36.662146+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:37.662535+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:38.662742+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.899217606s of 16.903829575s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:39.662926+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:40.663114+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:41.663401+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:42.663548+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945500 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:43.663680+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:44.664484+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:45.664629+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:46.664760+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:47.664950+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945632 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:48.665091+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:49.665349+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:50.665598+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:51.665773+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.463492393s of 12.501100540s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:52.665916+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946553 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:53.666121+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:54.666297+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:55.666428+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:56.666688+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:57.666901+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946405 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:58.667084+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e8d3000 session 0x562a91931680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:59.667393+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 720896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:00.667565+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 720896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:01.667708+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 720896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:02.667958+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945814 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.818078995s of 10.910258293s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:03.668117+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:04.668277+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:05.668472+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:06.668660+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:07.668871+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945682 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:08.669045+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:09.669216+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90539c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 688128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:10.669423+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 688128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:11.669562+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 688128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:12.669745+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945830 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 663552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:13.670002+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 663552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:14.670137+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 663552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:15.670307+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.875679970s of 12.895929337s, submitted: 6
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 655360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:16.670495+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:17.670695+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945830 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:18.670908+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:19.671057+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:20.671223+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:21.671428+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 630784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:22.671606+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945223 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 630784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:23.672215+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 630784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:24.672714+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 606208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:25.673233+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 606208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cac00 session 0x562a8ef27c20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:26.673420+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 606208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:27.673588+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 598016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90860400 session 0x562a8ef4b680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cb400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:28.673778+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 581632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:29.674016+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 581632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:30.674131+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 581632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:31.674315+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 573440 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:32.674490+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:33.674618+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:34.674765+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:35.674920+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:36.675094+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:37.675267+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:38.676865+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:39.677018+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:40.677148+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90539c00 session 0x562a91974d20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:41.677269+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:42.677396+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:43.677517+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:44.677698+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:45.677883+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:46.678088+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:47.678273+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:48.678442+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:49.678568+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:50.678754+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.936725616s of 35.959014893s, submitted: 6
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:51.678913+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:52.679071+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945223 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 540672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:53.679264+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 540672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:54.679412+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 532480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:55.679574+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 532480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:56.679731+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 524288 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:57.679946+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946751 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 507904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:58.680122+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 507904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:59.680266+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 507904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:00.680430+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 491520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:01.680674+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 491520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:02.680844+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946751 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 491520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:03.680985+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 475136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.957537651s of 12.999587059s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:04.681119+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:05.681484+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:06.681659+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:07.681858+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946451 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:08.681997+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:09.682156+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:10.682360+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:11.682506+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:12.682636+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:13.682788+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:14.682934+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:15.683192+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:16.683351+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:17.683843+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:18.684016+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:19.684238+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:20.684440+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:21.684619+0000)
Oct 10 10:23:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:23:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:22.684815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:23.684987+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:24.685164+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:25.685436+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:26.685571+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:27.685836+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:28.686086+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:29.686344+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:30.686522+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:31.686655+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:32.686909+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:33.687209+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a8ef27e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:34.687407+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:35.687566+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:36.687721+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:37.687935+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:38.688495+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:39.688721+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:40.688892+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:41.689125+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:42.689810+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:43.689970+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:44.690453+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.940834045s of 40.944816589s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:45.690864+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:46.691101+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 425984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:47.691357+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946735 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 425984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:48.691508+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:49.691708+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:50.691870+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cbc00 session 0x562a9145d860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:51.692089+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:52.692251+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948263 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:53.692406+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:54.692567+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:55.692800+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:56.692959+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:57.693110+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947656 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:58.693290+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:59.693443+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:00.693597+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.211167336s of 15.258452415s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:01.693732+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:02.693903+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947656 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:03.694062+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:04.694222+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:05.694382+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:06.694522+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:07.694736+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950696 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:08.694918+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:09.695058+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:10.695224+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:11.695443+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:12.695607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950696 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.754131317s of 12.794165611s, submitted: 12
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:13.695743+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:14.695981+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:15.696136+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:16.696300+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:17.696589+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:18.696753+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:19.696913+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:20.697050+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:21.697224+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:22.697436+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:23.697575+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:24.697715+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:25.697874+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a904a4000 session 0x562a9190b860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:26.698021+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:27.698212+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:28.698399+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:29.698563+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:30.698686+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:31.698815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:32.699003+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:33.699142+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:34.699363+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:35.699638+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:36.699811+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.348936081s of 23.356918335s, submitted: 2
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,1])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:37.700122+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950089 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:38.700295+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:39.700531+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:40.700725+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 417792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:41.700926+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 417792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:42.701112+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 417792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950105 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:43.701292+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:44.701501+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:45.701640+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:46.701788+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:47.702000+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949346 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:48.702162+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.013302803s of 12.052791595s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:49.702317+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:50.702623+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:51.702851+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:52.703064+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:53.703210+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:54.703373+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:55.703508+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:56.703694+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:57.703921+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90f18400 session 0x562a91995c20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:58.704203+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90861400 session 0x562a9145dc20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:59.704423+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:00.704606+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:01.704822+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:02.704971+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:03.705258+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:04.708422+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:05.708564+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:06.708760+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:07.708975+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:08.709118+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.434091568s of 20.441444397s, submitted: 2
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:09.709270+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:10.709494+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:11.709835+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:12.710016+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949055 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:13.710424+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:14.710609+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:15.710788+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:16.711024+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:17.711434+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949055 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:18.711636+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:19.711826+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1425408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:20.712056+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1425408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:21.712268+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:22.712463+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7747 writes, 31K keys, 7747 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7747 writes, 1564 syncs, 4.95 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 695 writes, 1219 keys, 695 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 695 writes, 338 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
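
The indented block ending here is RocksDB's periodic statistics dump, written through ceph-osd's embedded RocksDB instance: one pair of compaction-stats tables (by level, then by priority) per column family (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P), each followed by uptime, flush/AddFile totals, stall counters, and block-cache stats. Two details worth noting: the two BinnedLRUCache instances (1.12 GB at 0x562a8cd45350 and 224.00 MB at 0x562a8cd449b0) are shared across column families, which is why their lines repeat verbatim under several CFs, and "occupancy: 18446744073709551615" is 2^64 - 1, i.e. a uint64 holding -1, which appears to be a "not tracked" sentinel rather than a real entry count. A minimal sketch for pulling the per-CF Sum rows out of a saved copy of a dump like this one (the script, file path, and choice of fields are illustrative assumptions, not part of the log):

    import re
    import sys

    # Matches e.g. "** Compaction Stats [p-0] **"
    HEADER = re.compile(r"\*\* Compaction Stats \[([^\]]+)\] \*\*")
    # Matches e.g. " Sum      1/0    1.56 KB   0.0 ..." (size is "value unit")
    SUM_ROW = re.compile(r"^\s*Sum\s+(\S+)\s+(\S+ \S+)\s+(\S+)")

    def sum_rows(text):
        cf = None
        for line in text.splitlines():
            m = HEADER.search(line)
            if m:
                cf = m.group(1)
                continue
            m = SUM_ROW.match(line)
            if m and cf is not None:
                yield (cf, *m.groups())
                cf = None  # one Sum row per header; the Priority table has none

    if __name__ == "__main__":
        with open(sys.argv[1], errors="replace") as fh:
            for cf, files, size, score in sum_rows(fh.read()):
                print(f"{cf:8s} files={files:6s} size={size:10s} score={score}")

Run against a file holding the dump above, this prints one line per column family, e.g. "p-0 files=1/0 size=1.56 KB score=0.0".
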
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949039 data_alloc: 218103808 data_used: 139264
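
The _resize_shards line reports how bluestore's MempoolThread splits the tuned cache budget between shards. A quick check, plain arithmetic on the numbers in that line (reading the remainder as headroom is an assumption), shows the four allocations cover roughly 98.5% of cache_size:

    # Numbers copied from the _resize_shards line above.
    cache_size = 2845415832
    shards = {
        "kv": 1207959552,        # 1152 MiB
        "kv_onode": 234881024,   #  224 MiB
        "meta": 1140850688,      # 1088 MiB
        "data": 218103808,       #  208 MiB
    }
    allocated = sum(shards.values())
    print(f"allocated: {allocated} bytes "
          f"({allocated / cache_size:.1%} of cache_size)")  # ~98.5%
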
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:23.712625+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:24.712921+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.051300049s of 16.089799881s, submitted: 12
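
The _kv_sync_thread utilization line above is a duty-cycle report: over a ~16.09 s window the kv-sync thread was idle for ~16.05 s while committing 12 transaction batches, i.e. the OSD is nearly unloaded here. Worked out with only the numbers from the line:

    # Duty cycle of the kv-sync thread, from the log line above.
    idle, total, submitted = 16.051300049, 16.089799881, 12
    busy = total - idle                                   # ~0.0385 s syncing
    print(f"idle fraction: {idle / total:.2%}")           # ~99.76%
    print(f"avg busy per submit: {busy / submitted * 1e3:.1f} ms")  # ~3.2 ms
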
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:25.713190+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 1409024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000044s
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:26.713411+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:27.713617+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:28.713821+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:29.713976+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:30.714203+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:31.714391+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:32.714604+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:33.714762+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:34.714913+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:35.715068+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:36.715284+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:37.715568+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:38.715725+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:39.715917+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:40.716101+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:41.716815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:42.717235+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:43.718464+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:44.718597+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:45.718701+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:46.718834+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:47.719029+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:48.719183+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:49.719373+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:50.719532+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:51.719705+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:52.719876+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:53.720034+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:54.720428+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:55.720711+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:56.720864+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:57.721158+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:58.721376+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:59.721578+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:00.721912+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:01.722197+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:02.722401+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:03.722747+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:04.723106+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:05.723359+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:06.723503+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:07.723687+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:08.723861+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:09.724010+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:10.724153+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:11.724299+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:12.724487+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:13.724622+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:14.724745+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:15.724902+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:16.725102+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:17.725296+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:18.725457+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:19.725616+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:20.725801+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:21.725973+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:22.726107+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:23.726305+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:24.726498+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:25.726637+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:26.726792+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:27.727109+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:28.727315+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:29.727583+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:30.727822+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:31.728037+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:32.728235+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:33.728398+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:34.728550+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:35.728774+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:36.728925+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:37.729191+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:38.729426+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:39.729642+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:40.729869+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:41.730079+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:42.730302+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:43.730519+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:44.751112+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:45.751483+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:46.751806+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:47.752009+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a904a4000 session 0x562a915d5680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:48.752155+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:49.752414+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:50.752697+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:51.752923+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:52.753144+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:53.753418+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:54.753687+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:55.753903+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:56.754089+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:57.754464+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:58.754705+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90539c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 93.973350525s of 93.981391907s, submitted: 2
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:59.754940+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:00.755164+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:01.755433+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:02.755666+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:03.755864+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948923 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:04.756268+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:05.756423+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:06.756693+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:07.757105+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:08.757302+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948923 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:09.757563+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:10.757858+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:11.758088+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:12.758298+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1327104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:13.759112+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948923 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1327104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.952961922s of 15.011597633s, submitted: 9
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:14.759704+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 1318912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:15.760914+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1310720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:16.761182+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a91954960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1310720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:17.761375+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1310720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:18.761593+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:19.762558+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:20.762806+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:21.763129+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:22.763599+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:23.764088+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:24.764500+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:25.764910+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:26.765227+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:27.765631+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.264195442s of 13.267696381s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1286144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:28.765942+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948907 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1286144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:29.766120+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1286144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:30.766519+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1261568 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:31.766774+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1245184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90539c00 session 0x562a91974000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:32.766909+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1245184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:33.767126+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950435 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:34.767274+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 974848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:35.767427+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:36.767612+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:37.767846+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:38.768007+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950419 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:39.768221+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:40.768410+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:41.768608+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:42.768797+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 1974272 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.004447937s of 15.682758331s, submitted: 222
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:43.769022+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950551 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 1966080 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:44.769265+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 1966080 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:45.769446+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:46.769686+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:47.769916+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e8d3000 session 0x562a915983c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:48.770091+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951947 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:49.770298+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:50.770518+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1933312 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:51.770683+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1933312 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:52.770840+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:53.770989+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951947 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:54.771185+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:55.771367+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:56.771529+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:57.771685+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a917fd2c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:58.771932+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.269571304s of 15.308749199s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952063 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:59.772093+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:00.772291+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:01.772521+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:02.772674+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:03.772815+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953459 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:04.772984+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:05.773119+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:06.773361+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:07.773636+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:08.773792+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953459 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.677397728s of 10.700782776s, submitted: 6
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1859584 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:09.773969+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1859584 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:10.774159+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1859584 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:11.774381+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1818624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:12.774543+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1818624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:13.774709+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952868 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1818624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:14.774953+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:15.775159+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:16.775298+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:17.775564+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 2080768 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:18.775760+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952852 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 2080768 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:19.775951+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 2072576 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:20.776152+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.044954300s of 12.094819069s, submitted: 14
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 2048000 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:21.776308+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 2048000 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:22.776481+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:23.776624+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:24.776764+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:25.776928+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:26.777070+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:27.777292+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:28.777570+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:29.777709+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:30.777866+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:31.778031+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:32.778196+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:33.778454+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:34.778613+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:35.778750+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:36.778918+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:37.779139+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:38.779266+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:39.779419+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:40.779562+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:41.779766+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:42.780017+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:43.780175+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:44.780397+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:45.780558+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:46.780725+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:47.780906+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:48.781047+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:49.781289+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:50.781481+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:51.781675+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:52.781901+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:53.782038+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:54.782178+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:55.782365+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:56.782548+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:57.782773+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:58.782910+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:59.783127+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:00.783255+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:01.783484+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:02.783632+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:03.783791+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:04.784090+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:05.784318+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:06.784589+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:07.784784+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:08.784951+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:09.785098+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:10.785241+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:11.785446+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:12.785633+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:13.785797+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:14.786049+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:15.786223+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:16.786380+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:17.786666+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:18.786857+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:19.787022+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:20.787156+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:21.787400+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90861400 session 0x562a914d1860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:22.787719+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:23.787948+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:24.788484+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:25.788619+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:26.788756+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:27.788942+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:28.789132+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:29.789283+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:30.789428+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:31.789589+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:32.789714+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 71.986579895s of 71.992851257s, submitted: 2
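
The _kv_sync_thread utilization line quantifies how busy BlueStore's KV sync thread was over the reporting window: 71.986 s idle out of 71.992 s, with only 2 transactions submitted. The busy fraction falls out directly:

    # Busy fraction of the KV sync thread from the utilization line above.
    idle, window, submitted = 71.986579895, 71.992851257, 2
    busy = 1 - idle / window
    print(f"busy: {busy:.4%} over {window:.1f}s, "
          f"{submitted / window:.3f} submits/s")
    # busy: ~0.0087% -> the KV sync thread is essentially quiescent.

This matches the unchanging store_statfs numbers in the surrounding heartbeats: almost nothing is being written.
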
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:33.789842+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952261 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:34.789955+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:35.790096+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:36.790247+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:37.790511+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:38.790691+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955301 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:39.790896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:40.791084+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:41.791229+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:42.791416+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:43.791563+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955301 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:44.791712+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:45.791836+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:46.791985+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:47.792211+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:48.792388+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.808482170s of 15.905633926s, submitted: 12
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:49.792557+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955001 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:50.792760+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cbc00 session 0x562a918530e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:51.792946+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:52.793251+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:53.793413+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:54.793618+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955153 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:55.793797+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:56.793964+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:57.794160+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:58.794475+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:59.794617+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955153 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:00.794807+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:01.795559+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91acf800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.572267532s of 12.576242447s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:02.795970+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:03.796089+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:04.796502+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955285 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:05.796741+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:06.796871+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:07.797077+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:08.797261+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:09.797395+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955301 data_alloc: 218103808 data_used: 135168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:10.797545+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:11.797677+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:12.797805+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:13.797963+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:14.798109+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954103 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:15.798270+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:16.798386+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:17.798585+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.693298340s of 15.739171028s, submitted: 12
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:18.798713+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:19.798854+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953971 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91acfc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:20.799041+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a91acf800 session 0x562a907f7c20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:21.799214+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _renew_subs
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:22.799376+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 18440192 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 146 ms_handle_reset con 0x562a91acfc00 session 0x562a900f9a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:23.799551+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd6ea17/0xe28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18292736 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _renew_subs
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd6ea17/0xe28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:24.799726+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192488 data_alloc: 218103808 data_used: 143360
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 18251776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 147 ms_handle_reset con 0x562a8e8d3000 session 0x562a8db550e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:25.799910+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 18251776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa56e000/0x0/0x4ffc00000, data 0x21e09f9/0x229c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:26.800061+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:27.800251+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:28.800441+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:29.800615+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194146 data_alloc: 218103808 data_used: 143360
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:30.800811+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:31.800976+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.001250267s of 14.248284340s, submitted: 57
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:32.801162+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:33.801378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:34.803951+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193438 data_alloc: 218103808 data_used: 143360
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 18227200 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:35.804142+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 18227200 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:36.804307+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 18219008 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:37.804507+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:38.804641+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:39.804812+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193454 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:40.804976+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:41.805115+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:42.805255+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:43.805420+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:44.805559+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193454 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:45.805734+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:46.805884+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.964175224s of 15.001649857s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 18186240 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:47.806061+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 18186240 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:48.806228+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:49.806480+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193154 data_alloc: 218103808 data_used: 139264
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:50.806608+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:51.806749+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:52.806909+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:53.807226+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:54.807378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193306 data_alloc: 218103808 data_used: 143360
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:55.807590+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:56.807732+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:57.807924+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:58.808181+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:59.808400+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193306 data_alloc: 218103808 data_used: 143360
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:00.808537+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:01.808720+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:02.808883+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a907cbc00 session 0x562a91973c20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a90861400 session 0x562a91852b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e768000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a8e768000 session 0x562a90934000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:03.809069+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a904a4000 session 0x562a9159fa40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 18161664 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a8e8d3000 session 0x562a90f645a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:04.809253+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a907cbc00 session 0x562a9086fa40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223858 data_alloc: 234881024 data_used: 11616256
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 6692864 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a90861400 session 0x562a90991a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91acfc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:05.809401+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.849899292s of 18.852664948s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 96051200 unmapped: 6676480 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:06.809548+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a91acfc00 session 0x562a9152fc20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 5562368 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a8e8d3000 session 0x562a91ae34a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a904a4000 session 0x562a9076a780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a907cbc00 session 0x562a915d5e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a90861400 session 0x562a91975860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:07.809735+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 5480448 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:08.809909+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9f6c000/0x0/0x4ffc00000, data 0x27ded21/0x289e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 5447680 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:09.810235+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283676 data_alloc: 234881024 data_used: 11616256
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:10.810407+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:11.810605+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:12.810934+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:13.811099+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f6a000/0x0/0x4ffc00000, data 0x27e0cf3/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:14.811261+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120cc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a907f70e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287641 data_alloc: 234881024 data_used: 11616256
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 5079040 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:15.811429+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 5079040 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:16.811578+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1777664 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:17.811800+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.755753517s of 11.909161568s, submitted: 48
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 753664 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:18.811998+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 753664 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:19.812177+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331825 data_alloc: 234881024 data_used: 17592320
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 753664 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:20.812365+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:21.812601+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:22.813249+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:23.813498+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:24.813650+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331825 data_alloc: 234881024 data_used: 17592320
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:25.813859+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 729088 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:26.814046+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 729088 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:27.814213+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.845973015s of 10.017697334s, submitted: 63
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 4096000 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:28.814364+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 4153344 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:29.814545+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406217 data_alloc: 234881024 data_used: 18051072
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 4153344 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:30.814674+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 4153344 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8591000/0x0/0x4ffc00000, data 0x3011d16/0x30d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:31.814841+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 4136960 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:32.814984+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8591000/0x0/0x4ffc00000, data 0x3011d16/0x30d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 4104192 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:33.815175+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 4104192 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:34.815286+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400737 data_alloc: 234881024 data_used: 18055168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:35.815463+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:36.815636+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:37.815814+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:38.816119+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x3035d16/0x30f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:39.816372+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400737 data_alloc: 234881024 data_used: 18055168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:40.816520+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.514300346s of 13.687804222s, submitted: 45
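[Annotation] _kv_sync_thread is BlueStore's RocksDB commit thread; the record above says it sat idle for 13.514 s of a 13.688 s window while flushing 45 transactions, so the store is nearly unloaded. The busy fraction falls out directly:

```python
# Busy fraction of the kv_sync_thread, from the utilization record above.
idle, window, submitted = 13.514300346, 13.687804222, 45
busy = 1.0 - idle / window
print(f"busy {busy:.2%} of the window, {submitted / window:.1f} commits/s")
# -> busy 1.27% of the window, 3.3 commits/s
```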
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:41.816666+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:42.816806+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:43.816968+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f856b000/0x0/0x4ffc00000, data 0x303fd16/0x3101000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:44.817108+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400729 data_alloc: 234881024 data_used: 18055168
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:45.817301+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:46.817469+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f856b000/0x0/0x4ffc00000, data 0x303fd16/0x3101000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:47.817693+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:48.817830+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b36c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b36c00 session 0x562a9190af00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b36800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b36800 session 0x562a909343c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b36400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b36400 session 0x562a91045a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108019712 unmapped: 5193728 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a917fdc20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:49.817999+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c400 session 0x562a91995860
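[Annotation] Each handle_auth_request / ms_handle_reset pair above is a short-lived inbound connection: a peer answers a CephX challenge and the messenger then drops the session, which osd.0 reports as ms_handle_reset on the same connection pointer (0x562a91b36c00 and so on). At this debug level that is normal churn, likely heartbeat or probe traffic; pairing the two events by pointer is a quick way to confirm connections are not leaking. A sketch:

```python
import re

# Hedged sketch: pair "added challenge on 0xADDR" with the later
# "ms_handle_reset con 0xADDR" by connection pointer.
lines = [
    "monclient: handle_auth_request added challenge on 0x562a91b36c00",
    "osd.0 151 ms_handle_reset con 0x562a91b36c00 session 0x562a9190af00",
    "monclient: handle_auth_request added challenge on 0x562a91b36800",
    "osd.0 151 ms_handle_reset con 0x562a91b36800 session 0x562a909343c0",
]

open_cons = set()
for line in lines:
    if m := re.search(r"added challenge on (0x[0-9a-f]+)", line):
        open_cons.add(m.group(1))
    elif m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line):
        open_cons.discard(m.group(1))

print(f"{len(open_cons)} connection(s) challenged but never reset")  # -> 0
```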
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400917 data_alloc: 234881024 data_used: 18579456
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8568000/0x0/0x4ffc00000, data 0x3042d16/0x3104000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 3784704 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a91663e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:50.818119+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33400 session 0x562a916623c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33000 session 0x562a9076be00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:51.818239+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:52.819669+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9190b2c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:53.819830+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:54.819957+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413158 data_alloc: 234881024 data_used: 18579456
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849f000/0x0/0x4ffc00000, data 0x310ad78/0x31cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:55.820138+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849f000/0x0/0x4ffc00000, data 0x310ad78/0x31cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:56.820341+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 3252224 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:57.820494+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 3252224 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:58.820642+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 3252224 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849f000/0x0/0x4ffc00000, data 0x310ad78/0x31cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.070981979s of 18.176597595s, submitted: 30
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:59.820759+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412966 data_alloc: 234881024 data_used: 18579456
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:00.820997+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:01.821158+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849c000/0x0/0x4ffc00000, data 0x310bd78/0x31ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:02.821314+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:03.821469+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 3211264 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:04.821646+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849c000/0x0/0x4ffc00000, data 0x310bd78/0x31ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416818 data_alloc: 234881024 data_used: 19197952
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:05.821818+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:06.822111+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849e000/0x0/0x4ffc00000, data 0x310bd78/0x31ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:07.822346+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:08.822515+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:09.822679+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.322134018s of 10.349461555s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416834 data_alloc: 234881024 data_used: 19193856
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:10.822824+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:11.822979+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849b000/0x0/0x4ffc00000, data 0x310ed78/0x31d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:12.823110+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 3170304 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:13.823311+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110526464 unmapped: 3735552 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:14.823521+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443015 data_alloc: 234881024 data_used: 19501056
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26509 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
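[Annotation] The lone ceph-mgr line in this burst is the audit channel recording that client.admin dispatched an `orch ps` command to the orchestrator module (target mon-mgr). The cmd field is plain JSON, so the audit trail is machine-readable; a sketch of recovering the command:

```python
import json
import re

# Hedged sketch: recover the dispatched command from an audit-channel record.
line = ("log_channel(audit) log [DBG] : from='client.26509 -' "
        "entity='client.admin' cmd=[{\"prefix\": \"orch ps\", "
        "\"target\": [\"mon-mgr\", \"\"]}]: dispatch")

cmd = json.loads(re.search(r"cmd=(\[.*\])", line).group(1))
print(cmd[0]["prefix"])   # -> orch ps   (i.e. `ceph orch ps`)
print(cmd[0]["target"])   # -> ['mon-mgr', '']
```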
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 3612672 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:15.823780+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 3612672 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:16.823932+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810f000/0x0/0x4ffc00000, data 0x349ad78/0x355d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:17.824228+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:18.824425+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:19.824593+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450314 data_alloc: 234881024 data_used: 19574784
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:20.824750+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.018276215s of 11.213406563s, submitted: 61
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 3448832 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:21.824887+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f80ee000/0x0/0x4ffc00000, data 0x34bbd78/0x357e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 3448832 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:22.825026+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 3448832 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:23.825190+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32c00 session 0x562a9086f0e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9086e1e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:24.825392+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f855b000/0x0/0x4ffc00000, data 0x304cd16/0x310e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405668 data_alloc: 234881024 data_used: 18579456
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:25.825692+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:26.825898+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:27.826127+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:28.826292+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:29.826514+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a9197ef00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a91974b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405668 data_alloc: 234881024 data_used: 18579456
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:30.826635+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a91973e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:31.826851+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:32.827032+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:33.827199+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:34.827347+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:35.827557+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:36.827769+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:37.827988+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:38.828269+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:39.828501+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:40.828725+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:41.828895+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:42.829058+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:43.829249+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:44.829387+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:45.829538+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:46.829650+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:47.829794+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:48.829916+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:49.830172+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:50.830371+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:51.830587+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:52.830737+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:53.830894+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:54.831061+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:55.831185+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.818191528s of 34.928936005s, submitted: 42
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33000 session 0x562a90f943c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a915d4960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a910454a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a916625a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a914d1a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:56.831365+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2589d45/0x264a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:57.831521+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2589d45/0x264a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:58.831670+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:59.831896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120cc00 session 0x562a919952c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287964 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:00.832075+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:01.832212+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:02.832379+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2589d45/0x264a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:03.832534+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a919554a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9152f860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:04.832744+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a8e1bfc20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a91930d20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289778 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:05.832985+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:06.833136+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:07.833405+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 10698752 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:08.834002+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:09.834414+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306222 data_alloc: 234881024 data_used: 14807040
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:10.834716+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.633553505s of 14.775516510s, submitted: 42
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:11.835086+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:12.835393+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:13.835676+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:14.835985+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 9576448 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306202 data_alloc: 234881024 data_used: 14802944
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:15.836118+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 9576448 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:16.836246+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 9576448 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:17.836415+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 8740864 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:18.836672+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 9183232 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bf4000/0x0/0x4ffc00000, data 0x29b6d54/0x2a78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:19.837230+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356540 data_alloc: 234881024 data_used: 15572992
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:20.837527+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:21.837727+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:22.837862+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:23.838091+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:24.838288+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.645743370s of 13.833170891s, submitted: 62
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356712 data_alloc: 234881024 data_used: 15581184
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:25.838469+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:26.838693+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:27.838932+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:28.839192+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:29.839491+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 8978432 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356712 data_alloc: 234881024 data_used: 15581184
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:30.839689+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:31.839912+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:32.840066+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:33.840248+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:34.840426+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356712 data_alloc: 234881024 data_used: 15581184
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:35.840638+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32000 session 0x562a91994000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:36.840830+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 8962048 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:37.841035+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 8962048 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:38.841193+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 8953856 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:39.841441+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357928 data_alloc: 234881024 data_used: 15659008
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:40.841646+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:41.841860+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:42.842044+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:43.842195+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:44.842361+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.593759537s of 20.597139359s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33400 session 0x562a91844780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a919734a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355416 data_alloc: 234881024 data_used: 15663104
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:45.842491+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 9920512 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90944b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c2000/0x0/0x4ffc00000, data 0x21e8cf2/0x22a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:46.842644+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 9920512 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:47.842817+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:48.842969+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:49.843134+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262837 data_alloc: 234881024 data_used: 12136448
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:50.843283+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:51.843440+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:52.843569+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:53.843722+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:54.843915+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262837 data_alloc: 234881024 data_used: 12136448
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:55.844060+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.477752686s of 10.626935005s, submitted: 46
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:56.844223+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:57.844490+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:58.844620+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:59.844775+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262837 data_alloc: 234881024 data_used: 12136448
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:00.844906+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:01.845064+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:02.845231+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:03.845387+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:04.845534+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:05.845680+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262537 data_alloc: 234881024 data_used: 12136448
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:06.845832+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:07.846036+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:08.846176+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a9159f2c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:09.846378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:10.846516+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262689 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:11.846661+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a8e6645a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a915990e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a9086e3c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a916632c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.502140045s of 16.509193420s, submitted: 2
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:12.847507+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a919941e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33400 session 0x562a8daeb4a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8daea5a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a90fdc3c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a8e664f00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:13.848083+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:14.848405+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:15.849003+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323294 data_alloc: 234881024 data_used: 12140544
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a900f9e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:16.849687+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a8f39a780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:17.850585+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c3a000/0x0/0x4ffc00000, data 0x2970d55/0x2a32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90fde960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a90fdde00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:18.850762+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:19.851036+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:20.851498+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327094 data_alloc: 234881024 data_used: 12079104
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 21651456 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:21.851895+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:22.852128+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c3a000/0x0/0x4ffc00000, data 0x2970d55/0x2a32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:23.852278+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:24.852676+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:25.853183+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376342 data_alloc: 234881024 data_used: 19419136
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:26.853486+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c3a000/0x0/0x4ffc00000, data 0x2970d55/0x2a32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:27.853876+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:28.854083+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:29.854226+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:30.854406+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376342 data_alloc: 234881024 data_used: 19419136
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.519599915s of 18.695911407s, submitted: 27
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:31.854654+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 15073280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8955000/0x0/0x4ffc00000, data 0x2c4fd55/0x2d11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:32.854952+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:33.855108+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 14049280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:34.855269+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 14049280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:35.855430+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406194 data_alloc: 234881024 data_used: 19533824
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 14049280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:36.855595+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:37.855765+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:38.855896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a903ce000 session 0x562a911ea1e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:39.856034+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:40.856160+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406194 data_alloc: 234881024 data_used: 19533824
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:41.856306+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:42.856503+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:43.856675+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:44.856856+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:45.857031+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406194 data_alloc: 234881024 data_used: 19533824
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:46.857132+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 14016512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:47.857309+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 14016512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:48.857519+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a8e800960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91237400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91237400 session 0x562a918a23c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 13541376 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:49.857647+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.576541901s of 18.732076645s, submitted: 63
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a918a2b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 13475840 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:50.857835+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430056 data_alloc: 234881024 data_used: 19537920
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 13475840 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85a3000/0x0/0x4ffc00000, data 0x3007d55/0x30c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:51.858034+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85a3000/0x0/0x4ffc00000, data 0x3007d55/0x30c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 13475840 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:52.858479+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a900ad860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 13443072 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:53.858584+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 13443072 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91237400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91237400 session 0x562a90fdd860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:54.858730+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a8f45a1e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85a3000/0x0/0x4ffc00000, data 0x3007d55/0x30c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9159f680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 13123584 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:55.858868+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435746 data_alloc: 234881024 data_used: 19537920
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 13123584 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:56.859031+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 11182080 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:57.859166+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f857e000/0x0/0x4ffc00000, data 0x302bd65/0x30ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 11059200 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f857e000/0x0/0x4ffc00000, data 0x302bd65/0x30ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:58.859275+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 11059200 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:59.859417+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:00.859634+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454290 data_alloc: 234881024 data_used: 22233088
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:01.859796+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:02.859957+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f857e000/0x0/0x4ffc00000, data 0x302bd65/0x30ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:03.860069+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9159e780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:04.860177+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 11042816 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:05.860315+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454290 data_alloc: 234881024 data_used: 22233088
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 11042816 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:06.860443+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.009700775s of 17.099123001s, submitted: 14
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 11681792 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:07.860607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 11264000 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:08.860744+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35f5d65/0x36b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:09.860949+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:10.861081+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504442 data_alloc: 234881024 data_used: 22503424
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:11.861263+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:12.861399+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 11206656 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:13.861517+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35f5d65/0x36b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 11173888 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35f5d65/0x36b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9152f2c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9197f2c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:14.861665+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90ed1a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:15.861800+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407567 data_alloc: 234881024 data_used: 19537920
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:16.862068+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:17.862461+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.225418091s of 11.423836708s, submitted: 57
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:18.862599+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:19.862732+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8947000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 12320768 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:20.862938+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407567 data_alloc: 234881024 data_used: 19537920
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8947000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 12320768 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:21.863110+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 12320768 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:22.863402+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a900f9680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9509 writes, 36K keys, 9509 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9509 writes, 2350 syncs, 4.05 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1762 writes, 5246 keys, 1762 commit groups, 1.0 writes per commit group, ingest: 4.67 MB, 0.01 MB/s
                                           Interval WAL: 1762 writes, 786 syncs, 2.24 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 12353536 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8947000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:23.863622+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a91931860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:24.863770+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:25.864081+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278221 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:26.864305+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:27.864645+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:28.864814+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:29.865239+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:30.865582+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278221 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:31.866005+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.437479019s of 13.498319626s, submitted: 21
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:32.866265+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:33.866477+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:34.866687+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:35.867000+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278089 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:36.867144+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:37.867378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:38.867562+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:39.867854+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:40.868124+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278089 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:41.868419+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:42.868600+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:43.868767+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:44.868905+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:45.869120+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278089 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:46.869283+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:47.869541+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:48.869705+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:49.869840+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.384731293s of 18.389318466s, submitted: 1
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a9086f2c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 18341888 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:50.870012+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308481 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 18341888 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:51.870217+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:52.870422+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:53.870583+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:54.870758+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:55.870965+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308481 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:56.871131+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 18317312 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:57.871389+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 18317312 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:58.871602+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:59.871795+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:00.871956+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318361 data_alloc: 234881024 data_used: 12898304
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:01.872156+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:02.872350+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:03.872549+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:04.872821+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:05.873002+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318361 data_alloc: 234881024 data_used: 12898304
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 18104320 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:06.873180+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 18104320 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:07.873398+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 18104320 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:08.873550+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.978237152s of 19.004953384s, submitted: 15
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115548160 unmapped: 14548992 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:09.873680+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c36000/0x0/0x4ffc00000, data 0x2975ce3/0x2a35000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 16457728 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:10.873830+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361853 data_alloc: 234881024 data_used: 13336576
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:11.873981+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:12.874131+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:13.874279+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:14.874370+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:15.874522+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361853 data_alloc: 234881024 data_used: 13336576
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:16.874711+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:17.874873+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:18.875001+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:19.875163+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:20.875307+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361869 data_alloc: 234881024 data_used: 13336576
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:21.875521+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:22.875705+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:23.875861+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:24.876226+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:25.876366+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361869 data_alloc: 234881024 data_used: 13336576
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:26.876572+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:27.876802+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:28.877037+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:29.877187+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:30.877346+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362021 data_alloc: 234881024 data_used: 13340672
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:31.877600+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:32.877819+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:33.877991+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:34.878150+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:35.878366+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.573186874s of 26.738054276s, submitted: 88
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a91044000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433977 data_alloc: 234881024 data_used: 13340672
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 27410432 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:36.878576+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:37.878776+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810d000/0x0/0x4ffc00000, data 0x349fce3/0x355f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:38.878921+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a90fdc1e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:39.879096+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90861400 session 0x562a8db54b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:40.879243+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90861400 session 0x562a907f74a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9034da40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437092 data_alloc: 234881024 data_used: 13340672
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:41.879386+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 27377664 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:42.879630+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 27361280 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:43.879806+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:44.880016+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:45.880149+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508276 data_alloc: 234881024 data_used: 23834624
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:46.880272+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:47.880415+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:48.880533+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:49.880653+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:50.880785+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508276 data_alloc: 234881024 data_used: 23834624
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:51.880933+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:52.881075+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 21209088 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.664892197s of 16.756015778s, submitted: 18
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:53.881195+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124370944 unmapped: 17276928 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:54.881395+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:55.881568+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:56.881717+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:57.881941+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:58.882078+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:59.882216+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:00.882473+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:01.882622+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:02.882753+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:03.882910+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:04.883065+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:05.883226+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 17178624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:06.883417+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 17178624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:07.883656+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 17170432 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:08.883840+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 17170432 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:09.884095+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 17162240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:10.884267+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 17162240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:11.884482+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 17162240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.341291428s of 19.435050964s, submitted: 39
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:12.884608+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 17727488 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:13.884890+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:14.885027+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:15.885227+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:16.885415+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:17.885610+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:18.885772+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:19.885924+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:20.886060+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:21.886196+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:22.886349+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:23.886493+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:24.886830+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:25.887633+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a907ca400 session 0x562a8e275a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91abfc00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a907ca000 session 0x562a917fc960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:26.887783+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:27.888063+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a907cb400 session 0x562a917fc000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e768400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:28.888190+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:29.888414+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a8f408780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e769c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:30.888607+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 17702912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:31.888799+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 17702912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:32.888963+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 17702912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:33.889181+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.640014648s of 21.650600433s, submitted: 14
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 17694720 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:34.889314+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 19070976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:35.889511+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122757120 unmapped: 18890752 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568870 data_alloc: 234881024 data_used: 24580096
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:36.889642+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122798080 unmapped: 18849792 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:37.889838+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:38.890214+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:39.890397+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:40.890531+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90ed10e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a90716960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568870 data_alloc: 234881024 data_used: 24580096
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:41.890692+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a915d5680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:42.890825+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:43.891058+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:44.891281+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:45.891506+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366479 data_alloc: 234881024 data_used: 13328384
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:46.891688+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:47.891873+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.427377701s of 14.190353394s, submitted: 253
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8e6650e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:48.892017+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a914d1c20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:49.892210+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:50.892420+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:51.892568+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:52.892699+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:53.892871+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:54.893043+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:55.893228+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:56.893396+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:57.893566+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:58.893774+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:59.894018+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:00.894173+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:01.894370+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:02.894521+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:03.894716+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:04.894852+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:05.895051+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:06.895232+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:07.895434+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:08.895623+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:09.895800+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:10.895989+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:11.896174+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:12.896296+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:13.896420+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:14.896557+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:15.896689+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.166027069s of 28.204250336s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f043860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:16.896887+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305609 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 26206208 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:17.897106+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 26206208 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:18.897246+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 26206208 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:19.897529+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a8ef3ad20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114597888 unmapped: 27049984 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92a9000/0x0/0x4ffc00000, data 0x2302d06/0x23c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:20.897689+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114597888 unmapped: 27049984 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:21.898083+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307030 data_alloc: 234881024 data_used: 11579392
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:22.898194+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:23.898342+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:24.898488+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92a9000/0x0/0x4ffc00000, data 0x2302d06/0x23c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:25.898674+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90861400 session 0x562a90ed0000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:26.898805+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.468525887s of 10.493452072s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297160 data_alloc: 234881024 data_used: 11554816
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90f943c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:27.899002+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:28.899146+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:29.899287+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:30.899508+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:31.900029+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296604 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:32.900534+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:33.901054+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:34.901600+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:35.901764+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:36.902310+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296604 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:37.902663+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:38.902907+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 26853376 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.552070618s of 12.580025673s, submitted: 13
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90fdf4a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:39.903119+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:40.903312+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:41.903673+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368766 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a79ce3/0x2b39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:42.904246+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a79ce3/0x2b39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:43.904487+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 26230784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:44.904785+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 26230784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a910441e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:45.905023+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a79ce3/0x2b39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 26591232 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:46.905237+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370402 data_alloc: 234881024 data_used: 11554816
Oct 10 10:23:01 compute-0 ceph-osd[81941]: mgrc ms_handle_reset ms_handle_reset con 0x562a8db47400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248
Oct 10 10:23:01 compute-0 ceph-osd[81941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: get_auth_request con 0x562a90861400 auth_method 0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: mgrc handle_mgr_configure stats_period=5
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 26451968 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:47.905519+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 26451968 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:48.905649+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:49.905825+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b0f000/0x0/0x4ffc00000, data 0x2a9dce3/0x2b5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:50.906000+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b0f000/0x0/0x4ffc00000, data 0x2a9dce3/0x2b5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:51.906209+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422690 data_alloc: 234881024 data_used: 19124224
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:52.906446+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8e1bfe00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a8ef27e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.924575806s of 13.989303589s, submitted: 22
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a8f45b680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:53.906621+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:54.906777+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:55.907010+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:56.907183+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:57.907388+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:58.907525+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:59.907690+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:00.907829+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:01.907982+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:02.908135+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:03.908296+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:04.908448+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:05.908615+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:06.908819+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:07.908991+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:08.909149+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:09.909350+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:10.909554+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:11.909699+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:12.909971+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:13.910129+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:14.910242+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:15.910371+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.160041809s of 23.180587769s, submitted: 10
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90ed0780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:16.910536+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343576 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:17.910707+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:18.910839+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:19.910991+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec1000/0x0/0x4ffc00000, data 0x26ebce3/0x27ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:20.911150+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:21.911312+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343576 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:22.911507+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:23.911651+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec1000/0x0/0x4ffc00000, data 0x26ebce3/0x27ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:24.911873+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a8e274b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:25.912043+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 27803648 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:26.912173+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346674 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 27779072 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:27.912404+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:28.912599+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:29.912740+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:30.912907+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:31.913038+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381786 data_alloc: 234881024 data_used: 16793600
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:32.913199+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:33.913363+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:34.913507+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:35.914096+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:36.914225+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382242 data_alloc: 234881024 data_used: 16805888
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.772865295s of 20.838567734s, submitted: 15
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 25526272 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:37.943005+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c48000/0x0/0x4ffc00000, data 0x2963d06/0x2a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 24928256 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:38.943153+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:39.943278+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:40.943490+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:41.943654+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406652 data_alloc: 234881024 data_used: 17154048
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:42.943819+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:43.943977+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:44.944114+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:45.944277+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:46.944448+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406652 data_alloc: 234881024 data_used: 17154048
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:47.945027+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:48.945238+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:49.945451+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:50.945670+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:51.945868+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406652 data_alloc: 234881024 data_used: 17154048
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8ef4a780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.981020927s of 15.057462692s, submitted: 16
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a919943c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 24870912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:52.946025+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9197f860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:53.946160+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:54.946283+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:55.946511+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:56.946630+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:57.946867+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:58.947090+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:59.947260+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:00.947409+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:01.947596+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:02.947740+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:03.947896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:04.948028+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:05.948229+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:06.948417+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:07.948662+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:08.948837+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:09.949045+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:10.949400+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:11.949485+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:12.949602+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:13.949737+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:14.949895+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:15.950073+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:16.950188+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:17.950364+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.378128052s of 25.476999283s, submitted: 31
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a9086f0e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 28278784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:18.950460+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 28278784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:19.950569+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e38000/0x0/0x4ffc00000, data 0x2364ce3/0x2424000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 28278784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:20.950761+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a90934780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9084a3c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 28270592 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:21.950907+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9089c000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9089c000 session 0x562a919752c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335038 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90f64b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 28286976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:22.950981+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e12000/0x0/0x4ffc00000, data 0x2388d16/0x244a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 28286976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:23.951111+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:24.951437+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:25.951605+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:26.951782+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347681 data_alloc: 234881024 data_used: 12922880
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:27.951997+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:28.952141+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e12000/0x0/0x4ffc00000, data 0x2388d16/0x244a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:29.952269+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:30.952390+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 28229632 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:31.952544+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347681 data_alloc: 234881024 data_used: 12922880
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 28229632 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:32.952670+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 28229632 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:33.952845+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.913131714s of 15.963829994s, submitted: 17
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e12000/0x0/0x4ffc00000, data 0x2388d16/0x244a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 21536768 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:34.953012+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 22929408 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a918a3a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a8e7d50e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:35.953128+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a91598d20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e4c2400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e4c2400 session 0x562a91ae2780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90ed05a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a9159e780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a9159e5a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a8f408b40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e4c3400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e4c3400 session 0x562a8f4085a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 21749760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:36.953276+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455274 data_alloc: 234881024 data_used: 13545472
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 21733376 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:37.953478+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 21733376 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:38.953624+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f4094a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 21725184 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a8f408960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:39.953822+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 21700608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:40.954721+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 21692416 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:41.955405+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a9076b4a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9076af00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455290 data_alloc: 234881024 data_used: 13545472
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 21692416 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f49000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:42.955699+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119963648 unmapped: 21684224 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:43.955833+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 18702336 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:44.955957+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17358848 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:45.956213+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 17350656 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:46.956384+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512442 data_alloc: 234881024 data_used: 22044672
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 17317888 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:47.956586+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124338176 unmapped: 17309696 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:48.956711+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124338176 unmapped: 17309696 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:49.956843+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124370944 unmapped: 17276928 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:50.956981+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17358848 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:51.957183+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512442 data_alloc: 234881024 data_used: 22044672
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 17350656 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:52.957411+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 17350656 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:53.957888+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.338996887s of 20.593862534s, submitted: 102
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 11083776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:54.958378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 10895360 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:55.958642+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7821000/0x0/0x4ffc00000, data 0x3977d87/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:56.958922+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596236 data_alloc: 234881024 data_used: 23085056
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:57.959125+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:58.959265+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:59.959602+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:00.959805+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:01.960059+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596236 data_alloc: 234881024 data_used: 23085056
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:02.960254+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:03.960550+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:04.960678+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.052834511s of 11.249748230s, submitted: 88
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:05.960844+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:06.961026+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1595476 data_alloc: 234881024 data_used: 23089152
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:07.961239+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:08.961481+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:09.961790+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:10.961949+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:11.962082+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596388 data_alloc: 234881024 data_used: 23158784
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:12.962245+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:13.962413+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:14.962575+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f49000 session 0x562a9076b680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9086ed20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:15.962731+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f84bf000/0x0/0x4ffc00000, data 0x26e0d16/0x27a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:16.962825+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392317 data_alloc: 234881024 data_used: 13545472
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:17.962989+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f84bf000/0x0/0x4ffc00000, data 0x26e0d16/0x27a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:18.963178+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:19.963381+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8f043860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90f952c0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.779786110s of 14.850324631s, submitted: 25
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a9190ba40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:20.963525+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:21.963634+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:22.963760+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:23.963939+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:24.964048+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:25.964183+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:26.964373+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:27.964559+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:28.964682+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:29.965016+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:30.965166+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:31.965409+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:32.965567+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:33.965779+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:34.965946+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:35.966071+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:36.966205+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:37.966416+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:38.966621+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:39.966773+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:40.966963+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:41.967103+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:42.967389+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:43.967602+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:44.968407+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:45.969467+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a91045a40
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90ed0d20
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a91995860
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a8f408780
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:46.969611+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.354364395s of 26.400856018s, submitted: 19
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a900ac5a0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364392 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:47.970235+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:48.970411+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:49.971041+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:50.971211+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:51.971363+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f49000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f49000 session 0x562a9197e000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364392 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:52.991228+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9084a1e0
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90f94f00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:53.991399+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a90f94000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 19349504 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:54.991591+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 19349504 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:55.991721+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:56.991912+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398511 data_alloc: 234881024 data_used: 15749120
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:57.992085+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:58.992266+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:59.992524+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:00.992668+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:01.992825+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398511 data_alloc: 234881024 data_used: 15749120
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:02.992969+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:03.993136+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:04.993256+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.759384155s of 18.817558289s, submitted: 11
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:05.993379+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:06.993633+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448577 data_alloc: 234881024 data_used: 15872000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:07.993959+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:08.994094+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:09.994253+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:10.994466+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:11.994656+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448577 data_alloc: 234881024 data_used: 15872000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:12.994837+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:13.995057+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:14.995263+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:15.995378+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a918a2960
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a91955e00
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:16.995507+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.116869926s of 11.250986099s, submitted: 27
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 18972672 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f45b680
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:17.995896+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:18.996031+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:19.996205+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:20.996311+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:21.996524+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:22.996644+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:23.996818+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:24.996960+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:25.997077+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:26.997213+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:27.997412+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:28.997577+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:29.997720+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:30.997885+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:31.997994+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:32.998143+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:33.998309+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:34.998514+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:35.998635+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:36.998862+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:37.999111+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:38.999266+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:39.999412+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:40.999553+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:41.999782+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:42.999961+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:44.000140+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:45.000291+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:46.000503+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:47.000749+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:48.000995+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:49.001453+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:50.001620+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:51.001750+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:52.001914+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:53.002082+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:54.002204+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:55.002387+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:56.002517+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:57.002666+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:58.002828+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:59.002963+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:00.003072+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:01.003198+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:02.003365+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:03.003516+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:04.003703+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:05.003823+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:06.004008+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:07.004116+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:08.004257+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:09.004399+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:10.004528+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:11.004635+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:12.004763+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:13.004872+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:14.005017+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:15.005162+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:16.005304+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:17.005483+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:18.005633+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:19.005748+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:20.005862+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:21.006078+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:22.006209+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:23.006375+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:24.006492+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:25.006757+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18923520 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:26.006885+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18923520 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:27.006994+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18923520 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:28.007138+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 18857984 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'config diff' '{prefix=config diff}'
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 10:23:01 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'config show' '{prefix=config show}'
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:29.007262+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
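
[annotation] The `do_command 'config diff' ...`, `'config show'`, `'counter dump'`, and `'counter schema'` pairs show the OSD servicing admin-socket requests, with each request echoed together with the size of the reply it logged. These are the same commands an operator can issue directly against the daemon's .asok; a hedged sketch (assuming the ceph CLI is on PATH and can reach the socket, e.g. inside `cephadm shell`):

    # Hedged sketch: issue the admin-socket commands seen above against osd.0.
    import json
    import subprocess

    def osd_asok(*cmd: str):
        out = subprocess.run(["ceph", "daemon", "osd.0", *cmd],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    diff = osd_asok("config", "diff")    # settings changed from defaults
    perf = osd_asok("counter", "dump")   # perf counters, as polled above
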
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:30.007360+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 19054592 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:31.007475+0000)
Oct 10 10:23:01 compute-0 ceph-osd[81941]: do_command 'log dump' '{prefix=log dump}'
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
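
[annotation] The mgr's pgmap digest is the one-line cluster health summary: all 353 placement groups are active+clean, and only ~273 MiB of the 60 GiB raw capacity is used. A tolerant parser for this digest:

    # Parser for the mgr "pgmap vN: ..." digest line above.
    import re

    line = ("pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, "
            "273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s")
    m = re.match(r"pgmap v(\d+): (\d+) pgs: (.+?); (.+?) data, (.+?) used, "
                 r"(.+?) / (.+?) avail", line)
    version, npgs, states, data, used, avail, total = m.groups()
    print(npgs, "PGs:", states, "| used", used, "of", total)
    # -> 353 PGs: 353 active+clean | used 273 MiB of 60 GiB
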
Oct 10 10:23:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:01.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
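
[annotation] radosgw's beast frontend writes one access line per request. The anonymous `HEAD / HTTP/1.0` arriving every ~2 seconds from 192.168.122.100/.102 looks like a load-balancer health check (an inference from the cadence, not stated in the log). A regex for the beast format as it appears here, leaving the trailing "- - -" fields unnamed since their meaning isn't shown:

    # Regex for the radosgw "beast:" access-log line above.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f96beba75d0: 192.168.122.100 - anonymous '
            '[10/Oct/2025:10:23:01.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m["ip"], m["req"], m["status"], m["latency"] + "s")
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000s
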
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16953 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.16911 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.26030 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.26473 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/356895798' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.16926 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/320126284' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.26042 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.16932 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1569461498' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.16944 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2296408099' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.26057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/127520907' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2579803735' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/196607994' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26530 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16968 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 nova_compute[261329]: 2025-10-10 10:23:02.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:23:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643359259' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26093 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.16983 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26111 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26572 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:23:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633175545' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.26509 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.16953 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1623996306' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.26530 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3971028922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.16968 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3643359259' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.26093 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.26557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1196706439' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/661113299' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2633175545' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
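
[annotation] These bursts of `from='client...' cmd=[{"prefix": ...}]` audit lines are several pollers (cephadm and monitoring clients on the three 192.168.122.x hosts) walking read-only commands: orch ls/ps/status, mgr dump/metadata/versions, balancer status, crush dumps. Tallying prefixes is a quick way to see who generates mon traffic; a sketch, with an illustrative log path:

    # Tally audited command prefixes from a saved log (path is illustrative).
    import collections
    import re

    counts = collections.Counter()
    with open("/var/log/messages") as fh:
        for line in fh:
            m = re.search(r'cmd=\[\{"prefix": "([^"]+)"', line)
            if m:
                counts[m.group(1)] += 1
    for prefix, n in counts.most_common(5):
        print(f"{n:5d}  {prefix}")
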
Oct 10 10:23:02 compute-0 crontab[287452]: (root) LIST (root)
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26126 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17001 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26587 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 10 10:23:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914417947' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.236 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.316 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.316 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.316 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
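
[annotation] The Acquiring/acquired/released triad around "compute_resources" is oslo.concurrency serializing access to nova's ResourceTracker; the "waited 0.000s" / "held 0.663s" figures (the longer hold appears at the `_update_available_resource` release further down) come from the lock helper's own debug logging. A minimal sketch of the same pattern, with illustrative names:

    # Hedged sketch of the oslo.concurrency pattern behind the
    # "Lock 'compute_resources' acquired/released" lines (names illustrative).
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs with the in-process "compute_resources" lock held, which is
        # what produces the waited/held timing lines in the log above.
        pass

    update_available_resource()
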
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.316 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.317 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17022 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17025 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:23:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748618102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.761 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
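
[annotation] Nova's libvirt/RBD driver learns pool capacity by shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, exactly as the processutils lines above record (each call costing ~0.4-0.5 s here). A hedged equivalent; the key names under "stats" follow the ceph df JSON layout of recent releases and should be treated as an assumption:

    # Hedged equivalent of the `ceph df` call nova issues above.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)["stats"]
    print("total", stats["total_bytes"], "avail", stats["total_avail_bytes"])
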
Oct 10 10:23:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:03.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17052 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26156 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26638 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.16983 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.26111 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.26572 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3336206771' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.26126 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.17001 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.26587 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/914417947' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/563292797' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.17022 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.17025 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/781763434' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/748618102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.914 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.915 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4434MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.915 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.916 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.994 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:23:03 compute-0 nova_compute[261329]: 2025-10-10 10:23:03.995 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:23:04 compute-0 nova_compute[261329]: 2025-10-10 10:23:04.027 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:23:04 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17067 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26165 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:23:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3442204986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:04 compute-0 nova_compute[261329]: 2025-10-10 10:23:04.544 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:23:04 compute-0 nova_compute[261329]: 2025-10-10 10:23:04.550 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:23:04 compute-0 nova_compute[261329]: 2025-10-10 10:23:04.577 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
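
[annotation] The inventory dict nova reports to Placement determines schedulable capacity as (total - reserved) * allocation_ratio per resource class, so this host advertises 32 VCPU, 7168 MB of RAM, and 52.2 GB of disk. Worked directly from the numbers in the line above:

    # Placement capacity per resource class: (total - reserved) * allocation_ratio.
    # Inventory copied from the nova.scheduler.client.report line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc:<9} schedulable: {cap:g}")
    # -> VCPU 32, MEMORY_MB 7168, DISK_GB 52.2
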
Oct 10 10:23:04 compute-0 nova_compute[261329]: 2025-10-10 10:23:04.579 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:23:04 compute-0 nova_compute[261329]: 2025-10-10 10:23:04.579 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:23:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 10 10:23:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743398584' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17088 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.26614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.17052 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.26156 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.26638 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3878076498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2622965518' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.17067 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/54031956' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2189172015' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.26165 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1788666203' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1918956957' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3442204986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/743398584' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2057841537' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3927813275' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/983784123' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 10 10:23:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627566640' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 10 10:23:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961890703' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 10 10:23:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102851143' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:05 compute-0 nova_compute[261329]: 2025-10-10 10:23:05.580 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 10 10:23:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358533371' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 10 10:23:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3908035813' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.17088 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1496940488' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/627566640' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1961890703' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3007931880' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1496720933' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/166812341' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4102851143' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3249578032' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2734253997' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1358533371' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1178817591' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1692194911' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3908035813' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/574716574' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
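The burst of audit dispatches above (mgr dump, osd crush dump, osd crush rule ls, ... fanned out from all three controller IPs) is easy to summarize with a small parser. A sketch that counts dispatched command prefixes per client address; it matches only lines that carry a socket address, like those above:

    # Summarize ceph-mon audit lines: count each command prefix per client IP.
    # Usage (illustrative): journalctl | python3 audit_summary.py
    import re
    import sys
    from collections import Counter

    PAT = re.compile(
        r"from='client\.\S+ (?P<addr>[\d.]+):\d+/\d+'.*?"
        r'"prefix":\s*"(?P<prefix>[^"]+)"')

    counts = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            counts[(m.group('addr'), m.group('prefix'))] += 1

    for (addr, prefix), n in counts.most_common():
        print(f"{n:4d}  {addr:15s}  {prefix}")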
Oct 10 10:23:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 10 10:23:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270522901' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 10:23:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3276084272' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:23:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 10 10:23:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1993800113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 10 10:23:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190481607' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 10:23:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214331611' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26773 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:06 compute-0 systemd[1]: Starting Hostname Service...
Oct 10 10:23:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 10 10:23:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2432505021' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1245277696' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/362286490' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3456937404' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4270522901' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3276084272' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/496033978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3035571500' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/330407203' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1993800113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2553354890' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/190481607' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3152055571' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/341481678' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/214331611' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:07 compute-0 nova_compute[261329]: 2025-10-10 10:23:07.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:07 compute-0 systemd[1]: Started Hostname Service.
Oct 10 10:23:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:23:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:07.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
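The alertmanager warnings above show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out at port 8443. A raw TCP connect distinguishes "unreachable/filtered" (timeout, as logged) from "port closed" (connection refused); the hostnames and port below are taken from the log messages:

    # Probe the Prometheus-receiver endpoints alertmanager fails to reach.
    import socket

    targets = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} accepts TCP connections")
        except socket.timeout:
            print(f"{host}:{port} timed out (matches the i/o timeout above)")
        except OSError as exc:
            print(f"{host}:{port} failed: {exc}")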
Oct 10 10:23:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 10 10:23:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301091987' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26794 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:23:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 10 10:23:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77092020' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26303 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17247 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 10 10:23:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85375489' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:07.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26312 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.26773 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2432505021' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3212453945' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1152939535' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2358813671' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1301091987' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.26794 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/77092020' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2749167785' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/85375489' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26318 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26830 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 nova_compute[261329]: 2025-10-10 10:23:08.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17265 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17271 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26327 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26854 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17289 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26351 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26878 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:08.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:23:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:08 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17295 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26303 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.17247 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26312 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26318 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26830 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/311730347' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.17265 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.17271 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26327 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.26854 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1010273267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2112620568' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3785517594' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 10 10:23:09 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805709205' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26366 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26905 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17328 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 10 10:23:09 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563491473' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26926 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:09 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
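The from='admin socket' dispatch/finished pairs above are local mon_status queries over the monitor's Unix admin socket (what `ceph daemon mon.compute-0 mon_status` performs). A minimal raw-protocol sketch, assuming the usual framing (JSON command terminated by a null byte; reply prefixed by a 4-byte big-endian length) and a default .asok path, both of which are assumptions:

    # Query mon_status over the admin socket directly.
    import json
    import socket
    import struct

    ASOK = "/var/run/ceph/ceph-mon.compute-0.asok"  # assumed path

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(ASOK)
        s.sendall(json.dumps({"prefix": "mon_status"}).encode() + b"\0")
        (length,) = struct.unpack(">I", s.recv(4))
        buf = b""
        while len(buf) < length:
            buf += s.recv(length - len(buf))
    print(json.loads(buf)["state"])   # e.g. "leader", as in the log above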
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:09.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:09 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26399 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:23:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297017605' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.17289 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.26351 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.26878 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.17295 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3805709205' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.26366 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1170104191' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1415916768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.26905 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.17328 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/892024631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/563491473' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4079194258' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3580709824' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3297017605' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17373 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 podman[288445]: 2025-10-10 10:23:10.245236488 +0000 UTC m=+0.074216854 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:23:10 compute-0 podman[288456]: 2025-10-10 10:23:10.263043763 +0000 UTC m=+0.098176394 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:23:10 compute-0 podman[288457]: 2025-10-10 10:23:10.2812455 +0000 UTC m=+0.114730079 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
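The three podman health_status=healthy events above are the timer-driven health checks for the multipathd, iscsid and ovn_controller containers. They can be run on demand; a sketch via subprocess, with the container names taken from the log:

    # Re-run the podman healthchecks whose results are logged above.
    # `podman healthcheck run` exits 0 when the configured check passes.
    import subprocess

    for name in ("multipathd", "iscsid", "ovn_controller"):
        res = subprocess.run(["podman", "healthcheck", "run", name],
                             capture_output=True, text=True)
        status = "healthy" if res.returncode == 0 else "unhealthy"
        print(f"{name}: {status}")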
Oct 10 10:23:10 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26405 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 10 10:23:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1792969292' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26989 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.26384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.26926 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.26399 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.17373 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.26405 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/71675879' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1792969292' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1651511276' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 10:23:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447989855' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:11 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17463 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:11.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:11 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26462 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:11.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 10:23:12 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1227914892' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:12 compute-0 nova_compute[261329]: 2025-10-10 10:23:12.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.17397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.26989 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/447989855' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/366803169' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3592385461' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3216819042' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/804085901' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1227914892' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 10 10:23:12 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588161703' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27064 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:12 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 10 10:23:12 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769252938' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:13 compute-0 nova_compute[261329]: 2025-10-10 10:23:13.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:13 compute-0 ceph-mon[73551]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.17463 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.26462 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2970845557' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1570390407' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3588161703' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4049190113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2769252938' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2793981116' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 10 10:23:13 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638418532' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:13 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17517 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:13.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:13 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26501 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: from='client.27064 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/668363965' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/638418532' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4223693866' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3098976176' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27103 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 10 10:23:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2482729441' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 10 10:23:14 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2720008755' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:14 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27127 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17541 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.17517 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.26501 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.27103 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2482729441' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1056008335' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1135114031' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2720008755' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1333227027' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26522 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27139 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 10 10:23:15 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/856513782' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:15 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17565 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:15.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26540 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.27127 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.17541 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.26522 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.27139 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/856513782' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/693351407' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3141273853' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2028344608' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17577 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:23:16
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'backups', '.nfs', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
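The balancer pass above ("prepared 0/10 upmap changes") means upmap mode found nothing worth moving: with all 353 PGs active+clean on a nearly empty cluster, the distribution is already within the 0.05 max-misplaced threshold. The same state can be read back on demand; a small sketch shelling out to the CLI:

    # Read back the balancer state that produced the log lines above.
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout
    print(out)   # shows the active mode ("upmap") and last optimization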
Oct 10 10:23:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:23:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27163 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26546 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 10 10:23:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129244223' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17589 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:23:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:23:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 10 10:23:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3917452810' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:17 compute-0 nova_compute[261329]: 2025-10-10 10:23:17.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:17.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:17 compute-0 ceph-mon[73551]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='client.17565 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='client.26540 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='client.17577 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4129244223' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1989748693' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3917452810' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17607 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:17] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:23:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:17] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:23:17 compute-0 ovs-appctl[289967]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 10:23:17 compute-0 ovs-appctl[289973]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26570 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ovs-appctl[290007]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:17.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17622 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:17.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26579 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:18 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27196 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-0 nova_compute[261329]: 2025-10-10 10:23:18.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 10 10:23:18 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781781234' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.27163 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.26546 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.17589 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1097788986' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4187206927' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.17607 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3498280741' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1781781234' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:18 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7386 writes, 32K keys, 7386 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7386 writes, 7386 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1600 writes, 7302 keys, 1600 commit groups, 1.0 writes per commit group, ingest: 11.88 MB, 0.02 MB/s
                                           Interval WAL: 1600 writes, 1600 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    126.4      0.41              0.17        19    0.021       0      0       0.0       0.0
                                             L6      1/0   13.17 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    168.3    144.4      1.60              0.66        18    0.089    100K   9968       0.0       0.0
                                            Sum      1/0   13.17 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    134.1    140.7      2.00              0.83        37    0.054    100K   9968       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6    156.1    158.7      0.50              0.21        10    0.050     33K   3076       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    168.3    144.4      1.60              0.66        18    0.089    100K   9968       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    127.4      0.40              0.17        18    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.8      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.28 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 2.0 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b2d7d9350#2 capacity: 304.00 MB usage: 24.44 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000163 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1499,23.67 MB,7.78602%) FilterBlock(38,291.48 KB,0.0936358%) IndexBlock(38,496.36 KB,0.159449%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 10:23:18 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27208 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 10 10:23:18 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595267682' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:18.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:19 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27217 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26597 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.26570 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.17622 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.26579 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.27196 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1339899947' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.27208 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3595267682' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/929438571' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1213327011' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17661 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:19 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26603 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:19.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 10:23:19 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/401893655' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27250 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: from='client.27217 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: from='client.26597 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/436430043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2054736604' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1795034779' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/401893655' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:20 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 10 10:23:20 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539018218' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26633 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 sudo[291423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:23:21 compute-0 sudo[291423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:21 compute-0 sudo[291423]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:21 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17697 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.17661 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.26603 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.27250 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3929414897' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1781797121' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3032790570' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3539018218' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/151415482' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3519844817' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:21.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:23:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928458827' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 10 10:23:21 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1759395512' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:21.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:22 compute-0 nova_compute[261329]: 2025-10-10 10:23:22.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:22 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27298 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 10 10:23:22 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180821717' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.26633 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.17697 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2014367048' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2054782978' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/928458827' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1759395512' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/833543587' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3180821717' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 10 10:23:22 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316001559' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 10 10:23:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610891490' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 nova_compute[261329]: 2025-10-10 10:23:23.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:23 compute-0 podman[291641]: 2025-10-10 10:23:23.218167545 +0000 UTC m=+0.065202958 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:23:23 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26666 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17736 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.27298 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2593878606' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/314814720' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/316001559' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2474021729' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1610891490' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/579627303' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27325 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:23.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:23.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:23 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 10 10:23:23 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333192748' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:23:24 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27340 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 10 10:23:24 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129940614' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.26666 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.17736 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.27325 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1812088201' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/333192748' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1639796689' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.27340 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3281277725' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2129940614' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27349 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17766 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26687 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct 10 10:23:25 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018417856' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.27349 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.17766 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.26687 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1944349469' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2018417856' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3007322319' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3153766169' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17793 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26699 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:25.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27382 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:25.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:25 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27391 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26705 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:26 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 10 10:23:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316168477' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.17793 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.26699 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.27382 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.17805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.27391 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3316168477' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3944079894' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2707461924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:23:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 10 10:23:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/463257619' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
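[annotation] Every admin query in this stretch is logged twice by design: the mon's own handle_command trace carries the raw mon_command JSON, and the audit channel adds the caller's address and entity (client.admin, client.openstack, mgr.compute-0.xkdepb). The same payloads can be reproduced from the CLI; a hedged sketch, assuming a reachable cluster and an admin keyring on the host:

    # Hedged sketch: issue the same "osd dump" query seen in the audit log
    # via the ceph CLI (assumes /etc/ceph/ceph.conf plus admin keyring).
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "dump", "--format", "json-pretty"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["epoch"])   # e.g. 151, matching "osd e151" below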
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17856 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 nova_compute[261329]: 2025-10-10 10:23:27.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27433 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26738 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:27.226Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:23:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:27.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
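[annotation] These two Alertmanager lines are the only real failure in this window: the ceph-dashboard receiver posts alerts to the dashboard API on compute-1 and compute-2, the first attempt to compute-2 dies with a TCP i/o timeout, and the retries are abandoned once the notification's context deadline expires. A hedged probe of the same endpoint (URL copied from the log; the body and timeout are illustrative, not what Alertmanager sends):

    # Probe the dashboard receiver the way Alertmanager does (HTTP POST).
    # Hedged sketch: endpoint copied from the log; payload/timeout are
    # illustrative assumptions.
    import urllib.request

    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}",                      # dummy body just to exercise the route
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:               # covers timeouts and refused connections
        print("unreachable:", exc)

An i/o timeout, as opposed to "connection refused", usually means the packets are being dropped or the host is unreachable, rather than the port being closed.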
Oct 10 10:23:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:27] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:27] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17862 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/463257619' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3285063725' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4075409164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.17856 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.27433 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.26738 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mon[73551]: from='client.17862 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:23:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:27.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:27 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:23:27 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394245946' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:27.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
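[annotation] The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and 192.168.122.102, recurring every two seconds with 200 status and zero-byte bodies, have the shape of load-balancer health probes rather than user traffic (an inference; the log does not identify the prober). One such probe, reproduced by hand:

    # Hedged sketch: issue the same anonymous HEAD / probe that appears in
    # the beast access log. The port below is an assumption -- the log does
    # not record which local port radosgw is bound to.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # expect "200 OK" with an empty body
    conn.close()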
Oct 10 10:23:28 compute-0 nova_compute[261329]: 2025-10-10 10:23:28.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:28 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 10 10:23:28 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702895973' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17901 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.27439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2394245946' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3733082418' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/742710764' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2702895973' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1570035963' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2080136583' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:28.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:28 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.17913 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
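[annotation] The virtqemud error is a modular-libvirt symptom: virtqemud delegates storage-pool APIs to virtstoraged over a UNIX socket, and /var/run/libvirt/virtstoraged-sock-ro simply is not there, meaning virtstoraged (or its socket-activation units) is not running on this host. A hedged check for the socket (path copied verbatim from the log):

    # Hedged sketch: check whether the virtstoraged read-only socket exists
    # and accepts connections.
    import socket

    PATH = "/var/run/libvirt/virtstoraged-sock-ro"
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(PATH)
        print("socket is up")
    except FileNotFoundError:
        print("socket missing -- virtstoraged not started (matches the log)")
    except ConnectionRefusedError:
        print("socket present but nothing listening")
    finally:
        s.close()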
Oct 10 10:23:29 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26768 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 10:23:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354894680' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:29 compute-0 ceph-mon[73551]: from='client.17901 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mon[73551]: from='client.27469 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mon[73551]: from='client.17913 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mon[73551]: from='client.26768 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1354894680' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:29 compute-0 systemd[1]: Starting Time & Date Service...
Oct 10 10:23:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:29.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:29 compute-0 systemd[1]: Started Time & Date Service.
Oct 10 10:23:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:29.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 10 10:23:29 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070543984' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-0 ceph-mon[73551]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:30 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2650376232' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1070543984' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/344713102' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:23:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:31.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:31.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:32 compute-0 nova_compute[261329]: 2025-10-10 10:23:32.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:32 compute-0 sudo[292553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:23:32 compute-0 ceph-mon[73551]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:32 compute-0 sudo[292553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:32 compute-0 sudo[292553]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:32 compute-0 sudo[292578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:23:32 compute-0 sudo[292578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:33 compute-0 nova_compute[261329]: 2025-10-10 10:23:33.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:33 compute-0 sudo[292578]: pam_unix(sudo:session): session closed for user root
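[annotation] The sudo lines trace cephadm's host-management pattern: the mgr ships a content-addressed copy of the cephadm binary into /var/lib/ceph/<fsid>/, locates python3 with /bin/which, then runs subcommands such as gather-facts under a timeout. A sketch of that two-step invocation as it appears here (paths copied from the log; this mirrors the observable shell pattern, not mgr/cephadm internals):

    # Hedged sketch of the invocation pattern visible in the sudo lines
    # above (not the actual mgr/cephadm implementation).
    import subprocess

    FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"
    BIN = (f"/var/lib/ceph/{FSID}/cephadm."
           "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    python3 = subprocess.run(
        ["sudo", "/bin/which", "python3"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    subprocess.run(
        ["sudo", python3, BIN, "--timeout", "895", "gather-facts"],
        check=True,
    )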
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:23:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:23:33 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:23:33 compute-0 sudo[292637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:23:33 compute-0 sudo[292637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:33 compute-0 sudo[292637]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:33.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:33 compute-0 sudo[292662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:23:33 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:23:33 compute-0 sudo[292662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:23:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:33.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.108490792 +0000 UTC m=+0.049988756 container create 426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.083210681 +0000 UTC m=+0.024708675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:23:34 compute-0 systemd[1]: Started libpod-conmon-426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528.scope.
Oct 10 10:23:34 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.236882853 +0000 UTC m=+0.178380837 container init 426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.245892039 +0000 UTC m=+0.187390023 container start 426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.250540956 +0000 UTC m=+0.192038950 container attach 426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:23:34 compute-0 wonderful_sammet[292745]: 167 167
Oct 10 10:23:34 compute-0 systemd[1]: libpod-426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528.scope: Deactivated successfully.
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.254030327 +0000 UTC m=+0.195528291 container died 426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b8c55d2fd1e87e3ff35cfb3b0547b0edf7df2019efdf5c5a4c684e30d94d6a3-merged.mount: Deactivated successfully.
Oct 10 10:23:34 compute-0 podman[292728]: 2025-10-10 10:23:34.294228132 +0000 UTC m=+0.235726096 container remove 426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 10:23:34 compute-0 systemd[1]: libpod-conmon-426a8729fe63b96a31997ed93042182e2ef6edb731a7e369b4b61005ce361528.scope: Deactivated successfully.
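[annotation] wonderful_sammet's whole lifecycle (create, init, start, attach, died, remove) fits inside roughly 200 ms: cephadm runs each ceph-volume step in a fresh throwaway container from the pinned image and deletes it immediately, and the "167 167" it printed is the ceph uid/gid pair baked into the image. Roughly equivalent to the following (hedged; the command shown is illustrative, not what cephadm executed):

    # Hedged sketch: the one-shot container pattern, approximated with
    # 'podman run --rm'. The image digest is copied from the log; the
    # command ('id -u'/'id -g') is illustrative.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "sh", "-c", "id -u ceph; id -g ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)   # "167\n167\n" -- the uid/gid pair logged by wonderful_sammet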
Oct 10 10:23:34 compute-0 podman[292770]: 2025-10-10 10:23:34.46575257 +0000 UTC m=+0.042316473 container create 8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_villani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 10 10:23:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:34 compute-0 systemd[1]: Started libpod-conmon-8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01.scope.
Oct 10 10:23:34 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4061b6589688f65caab4b04abc92dccc9194b8c3cd5d82d7c39eb3b46368eda1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:34 compute-0 podman[292770]: 2025-10-10 10:23:34.447703867 +0000 UTC m=+0.024267790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4061b6589688f65caab4b04abc92dccc9194b8c3cd5d82d7c39eb3b46368eda1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4061b6589688f65caab4b04abc92dccc9194b8c3cd5d82d7c39eb3b46368eda1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4061b6589688f65caab4b04abc92dccc9194b8c3cd5d82d7c39eb3b46368eda1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4061b6589688f65caab4b04abc92dccc9194b8c3cd5d82d7c39eb3b46368eda1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
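[annotation] The xfs warnings fire once per bind mount as podman assembles the container rootfs: this filesystem was created without the bigtime feature, so its on-disk timestamps are 32-bit and stop at 0x7fffffff seconds. Converting that limit:

    # Worked conversion of the kernel's 0x7fffffff limit to a calendar date.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF                       # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the classic 32-bit time_t rollover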
Oct 10 10:23:34 compute-0 podman[292770]: 2025-10-10 10:23:34.587491439 +0000 UTC m=+0.164055352 container init 8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Oct 10 10:23:34 compute-0 podman[292770]: 2025-10-10 10:23:34.593980576 +0000 UTC m=+0.170544479 container start 8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:23:34 compute-0 podman[292770]: 2025-10-10 10:23:34.674389175 +0000 UTC m=+0.250953078 container attach 8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_villani, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:23:34 compute-0 ceph-mon[73551]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:34 compute-0 nervous_villani[292786]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:23:34 compute-0 nervous_villani[292786]: --> All data devices are unavailable
Oct 10 10:23:34 compute-0 systemd[1]: libpod-8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01.scope: Deactivated successfully.
Oct 10 10:23:34 compute-0 podman[292770]: 2025-10-10 10:23:34.974918143 +0000 UTC m=+0.551482046 container died 8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_villani, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 10 10:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4061b6589688f65caab4b04abc92dccc9194b8c3cd5d82d7c39eb3b46368eda1-merged.mount: Deactivated successfully.
Oct 10 10:23:35 compute-0 podman[292770]: 2025-10-10 10:23:35.019164586 +0000 UTC m=+0.595728489 container remove 8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_villani, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:23:35 compute-0 systemd[1]: libpod-conmon-8ec2a791dd77e6bc3aab9323eda07a9552424cf5b4ef977b060a5517556a7c01.scope: Deactivated successfully.
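[annotation] The nervous_villani output above is the meaningful result of this provisioning pass: "lvm batch" received one LVM data device and declared it unavailable. That is expected on a re-run, because /dev/ceph_vg0/ceph_lv0 already carries OSD 0, as the "lvm list" JSON further down confirms (ceph.osd_id=0, ceph.osd_fsid=c307f4a4-... in lv_tags), so the default_drive_group spec has nothing new to create. A hedged check over that JSON:

    # Hedged sketch: explain why 'lvm batch' skipped the device by
    # inspecting 'ceph-volume lvm list --format json' output (shape copied
    # from the log; reading a captured file here for brevity).
    import json

    with open("lvm_list.json") as fh:        # hypothetical capture of the output
        report = json.load(fh)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"{lv['lv_path']}: already OSD {tags.get('ceph.osd_id', osd_id)} "
                  f"(fsid {tags.get('ceph.osd_fsid')})")
    # /dev/ceph_vg0/ceph_lv0: already OSD 0 -> reported as 'unavailable'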
Oct 10 10:23:35 compute-0 sudo[292662]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:35 compute-0 sudo[292813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:23:35 compute-0 sudo[292813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:35 compute-0 sudo[292813]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:35 compute-0 sudo[292838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:23:35 compute-0 sudo[292838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.592649999 +0000 UTC m=+0.040136954 container create a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 10:23:35 compute-0 systemd[1]: Started libpod-conmon-a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507.scope.
Oct 10 10:23:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.662927167 +0000 UTC m=+0.110414162 container init a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.574620337 +0000 UTC m=+0.022107312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.66930131 +0000 UTC m=+0.116788295 container start a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:23:35 compute-0 nervous_albattani[292920]: 167 167
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.674255246 +0000 UTC m=+0.121742241 container attach a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:23:35 compute-0 systemd[1]: libpod-a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507.scope: Deactivated successfully.
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.675424013 +0000 UTC m=+0.122911028 container died a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-743accfcfcbc8d971e8199e1cc689373374ede85cc7521c030584f4500b1cbbe-merged.mount: Deactivated successfully.
Oct 10 10:23:35 compute-0 podman[292903]: 2025-10-10 10:23:35.711131126 +0000 UTC m=+0.158618061 container remove a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_albattani, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:23:35 compute-0 systemd[1]: libpod-conmon-a916b1b02756fba091d9a3acdf2988f1b6a214c4e92cc1bd18e566df50d5e507.scope: Deactivated successfully.
Oct 10 10:23:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:35.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:35 compute-0 podman[292943]: 2025-10-10 10:23:35.864095365 +0000 UTC m=+0.038244113 container create d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 10:23:35 compute-0 systemd[1]: Started libpod-conmon-d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a.scope.
Oct 10 10:23:35 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b9c52e91f313e1eea1a0390f289949959f9a84cc32351b2ec750977ff3047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b9c52e91f313e1eea1a0390f289949959f9a84cc32351b2ec750977ff3047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b9c52e91f313e1eea1a0390f289949959f9a84cc32351b2ec750977ff3047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b9c52e91f313e1eea1a0390f289949959f9a84cc32351b2ec750977ff3047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:35 compute-0 podman[292943]: 2025-10-10 10:23:35.848021766 +0000 UTC m=+0.022170534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:23:35 compute-0 podman[292943]: 2025-10-10 10:23:35.959538661 +0000 UTC m=+0.133687419 container init d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 10:23:35 compute-0 podman[292943]: 2025-10-10 10:23:35.965686877 +0000 UTC m=+0.139835645 container start d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 10:23:35 compute-0 podman[292943]: 2025-10-10 10:23:35.971182961 +0000 UTC m=+0.145331739 container attach d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:23:36 compute-0 sad_moser[292960]: {
Oct 10 10:23:36 compute-0 sad_moser[292960]:     "0": [
Oct 10 10:23:36 compute-0 sad_moser[292960]:         {
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "devices": [
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "/dev/loop3"
Oct 10 10:23:36 compute-0 sad_moser[292960]:             ],
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "lv_name": "ceph_lv0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "lv_size": "21470642176",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "name": "ceph_lv0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "tags": {
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.cluster_name": "ceph",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.crush_device_class": "",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.encrypted": "0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.osd_id": "0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.type": "block",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.vdo": "0",
Oct 10 10:23:36 compute-0 sad_moser[292960]:                 "ceph.with_tpm": "0"
Oct 10 10:23:36 compute-0 sad_moser[292960]:             },
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "type": "block",
Oct 10 10:23:36 compute-0 sad_moser[292960]:             "vg_name": "ceph_vg0"
Oct 10 10:23:36 compute-0 sad_moser[292960]:         }
Oct 10 10:23:36 compute-0 sad_moser[292960]:     ]
Oct 10 10:23:36 compute-0 sad_moser[292960]: }
Oct 10 10:23:36 compute-0 systemd[1]: libpod-d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a.scope: Deactivated successfully.
Oct 10 10:23:36 compute-0 podman[292943]: 2025-10-10 10:23:36.270635275 +0000 UTC m=+0.444784023 container died d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-045b9c52e91f313e1eea1a0390f289949959f9a84cc32351b2ec750977ff3047-merged.mount: Deactivated successfully.
Oct 10 10:23:36 compute-0 podman[292943]: 2025-10-10 10:23:36.335697028 +0000 UTC m=+0.509845776 container remove d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_moser, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:23:36 compute-0 systemd[1]: libpod-conmon-d7dab4520c03394566e0655c0c7f465db4dd360b414060910014c49a4f79b41a.scope: Deactivated successfully.
Oct 10 10:23:36 compute-0 sudo[292838]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:36 compute-0 sudo[292984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:23:36 compute-0 sudo[292984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:36 compute-0 sudo[292984]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:36 compute-0 sudo[293009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:23:36 compute-0 sudo[293009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:36 compute-0 ceph-mon[73551]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:36 compute-0 podman[293074]: 2025-10-10 10:23:36.890750306 +0000 UTC m=+0.037514450 container create bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 10 10:23:36 compute-0 systemd[1]: Started libpod-conmon-bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888.scope.
Oct 10 10:23:36 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:23:36 compute-0 podman[293074]: 2025-10-10 10:23:36.968113729 +0000 UTC m=+0.114877913 container init bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:23:36 compute-0 podman[293074]: 2025-10-10 10:23:36.874648116 +0000 UTC m=+0.021412280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:23:36 compute-0 podman[293074]: 2025-10-10 10:23:36.975617147 +0000 UTC m=+0.122381291 container start bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gould, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:23:36 compute-0 podman[293074]: 2025-10-10 10:23:36.978749796 +0000 UTC m=+0.125513980 container attach bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 10:23:36 compute-0 heuristic_gould[293091]: 167 167
Oct 10 10:23:36 compute-0 systemd[1]: libpod-bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888.scope: Deactivated successfully.
Oct 10 10:23:36 compute-0 podman[293074]: 2025-10-10 10:23:36.982148544 +0000 UTC m=+0.128912758 container died bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a7375d699529bed514fdfebf5c3ca2931eff149442ea91355454ea07fff85c0-merged.mount: Deactivated successfully.
Oct 10 10:23:37 compute-0 podman[293074]: 2025-10-10 10:23:37.032130059 +0000 UTC m=+0.178894213 container remove bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 10:23:37 compute-0 systemd[1]: libpod-conmon-bcc3b733dfc61d21a1cf7598939f697d471a233bab595bbda465c77b81d03888.scope: Deactivated successfully.
Oct 10 10:23:37 compute-0 nova_compute[261329]: 2025-10-10 10:23:37.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:37 compute-0 podman[293117]: 2025-10-10 10:23:37.210288567 +0000 UTC m=+0.043819160 container create bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:23:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:37.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:37 compute-0 systemd[1]: Started libpod-conmon-bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064.scope.
Oct 10 10:23:37 compute-0 podman[293117]: 2025-10-10 10:23:37.19239035 +0000 UTC m=+0.025920973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:23:37 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f223b58ae0a14f86c7db666e425f45042c1f672bc71e2cfa45e337785b2496/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f223b58ae0a14f86c7db666e425f45042c1f672bc71e2cfa45e337785b2496/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f223b58ae0a14f86c7db666e425f45042c1f672bc71e2cfa45e337785b2496/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f223b58ae0a14f86c7db666e425f45042c1f672bc71e2cfa45e337785b2496/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:23:37 compute-0 podman[293117]: 2025-10-10 10:23:37.311976992 +0000 UTC m=+0.145507605 container init bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:23:37 compute-0 podman[293117]: 2025-10-10 10:23:37.323010691 +0000 UTC m=+0.156541274 container start bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:23:37 compute-0 podman[293117]: 2025-10-10 10:23:37.326314706 +0000 UTC m=+0.159845349 container attach bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 10:23:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:37] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:23:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:37] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:23:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:37.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:37.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:38 compute-0 lvm[293207]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:23:38 compute-0 lvm[293207]: VG ceph_vg0 finished
Oct 10 10:23:38 compute-0 sleepy_wing[293133]: {}
Oct 10 10:23:38 compute-0 systemd[1]: libpod-bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064.scope: Deactivated successfully.
Oct 10 10:23:38 compute-0 podman[293117]: 2025-10-10 10:23:38.113461934 +0000 UTC m=+0.946992547 container died bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:23:38 compute-0 systemd[1]: libpod-bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064.scope: Consumed 1.176s CPU time.
Oct 10 10:23:38 compute-0 nova_compute[261329]: 2025-10-10 10:23:38.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-19f223b58ae0a14f86c7db666e425f45042c1f672bc71e2cfa45e337785b2496-merged.mount: Deactivated successfully.
Oct 10 10:23:38 compute-0 podman[293117]: 2025-10-10 10:23:38.163027835 +0000 UTC m=+0.996558438 container remove bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 10:23:38 compute-0 systemd[1]: libpod-conmon-bd8f1e40ac902c601a66a25312d0868195af212f59c80db31f3596e416b23064.scope: Deactivated successfully.
Oct 10 10:23:38 compute-0 sudo[293009]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:23:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:38 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:23:38 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:38 compute-0 sudo[293226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:23:38 compute-0 sudo[293226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:38 compute-0 sudo[293226]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:38.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:39 compute-0 ceph-mon[73551]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:39 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:39.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:39.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:41 compute-0 podman[293255]: 2025-10-10 10:23:41.219269185 +0000 UTC m=+0.061191981 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:23:41 compute-0 podman[293254]: 2025-10-10 10:23:41.258151308 +0000 UTC m=+0.101443407 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:23:41 compute-0 podman[293256]: 2025-10-10 10:23:41.26198643 +0000 UTC m=+0.101850161 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:23:41 compute-0 ceph-mon[73551]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:41 compute-0 sudo[293322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:23:41 compute-0 sudo[293322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:41 compute-0 sudo[293322]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:41.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:41.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:23:41.911 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:23:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:23:41.911 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:23:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:23:41.911 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:23:42 compute-0 nova_compute[261329]: 2025-10-10 10:23:42.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:43 compute-0 nova_compute[261329]: 2025-10-10 10:23:43.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:43 compute-0 ceph-mon[73551]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:45 compute-0 ceph-mon[73551]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:45.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:45.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:23:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:23:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:23:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:23:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:23:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:23:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:23:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:47 compute-0 nova_compute[261329]: 2025-10-10 10:23:47.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:47.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:23:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:47.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:23:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:23:47 compute-0 ceph-mon[73551]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:47.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:48 compute-0 nova_compute[261329]: 2025-10-10 10:23:48.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:48 compute-0 ceph-mon[73551]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:49.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:49.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:50 compute-0 ceph-mon[73551]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:51.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:52 compute-0 nova_compute[261329]: 2025-10-10 10:23:52.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:52 compute-0 ceph-mon[73551]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:53 compute-0 nova_compute[261329]: 2025-10-10 10:23:53.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:54 compute-0 podman[293360]: 2025-10-10 10:23:54.231113956 +0000 UTC m=+0.083287971 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 10:23:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:54 compute-0 ceph-mon[73551]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:55.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:23:56 compute-0 ceph-mon[73551]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:57 compute-0 nova_compute[261329]: 2025-10-10 10:23:57.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:57.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:23:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:57.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:23:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:23:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:23:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:57.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:58 compute-0 nova_compute[261329]: 2025-10-10 10:23:58.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:58 compute-0 nova_compute[261329]: 2025-10-10 10:23:58.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:58 compute-0 nova_compute[261329]: 2025-10-10 10:23:58.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:23:58 compute-0 nova_compute[261329]: 2025-10-10 10:23:58.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:23:58 compute-0 nova_compute[261329]: 2025-10-10 10:23:58.253 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:23:58 compute-0 ceph-mon[73551]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:23:58.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:23:59 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-0[78973]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 10 10:23:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:59 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 10:23:59 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 10:23:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:23:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:23:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:00 compute-0 ceph-mon[73551]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:01 compute-0 nova_compute[261329]: 2025-10-10 10:24:01.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:01 compute-0 nova_compute[261329]: 2025-10-10 10:24:01.239 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:24:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:01 compute-0 sudo[293390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:24:01 compute-0 sudo[293390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:01 compute-0 sudo[293390]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:24:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/740909772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:01.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/740909772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:02 compute-0 nova_compute[261329]: 2025-10-10 10:24:02.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:02 compute-0 nova_compute[261329]: 2025-10-10 10:24:02.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:02 compute-0 nova_compute[261329]: 2025-10-10 10:24:02.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:02 compute-0 nova_compute[261329]: 2025-10-10 10:24:02.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:02 compute-0 ceph-mon[73551]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3076253319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:03 compute-0 nova_compute[261329]: 2025-10-10 10:24:03.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:03.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:24:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.263 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.264 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.264 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.264 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.265 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:24:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:24:04 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898300847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.745 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:24:04 compute-0 ceph-mon[73551]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1898300847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.916 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.917 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4377MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.918 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.918 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.977 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.977 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:24:04 compute-0 nova_compute[261329]: 2025-10-10 10:24:04.990 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:24:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:24:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544635714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:05 compute-0 nova_compute[261329]: 2025-10-10 10:24:05.416 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:24:05 compute-0 nova_compute[261329]: 2025-10-10 10:24:05.422 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:24:05 compute-0 nova_compute[261329]: 2025-10-10 10:24:05.440 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:24:05 compute-0 nova_compute[261329]: 2025-10-10 10:24:05.442 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:24:05 compute-0 nova_compute[261329]: 2025-10-10 10:24:05.442 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:24:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:05.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:05.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:05 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2544635714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:06 compute-0 ceph-mon[73551]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:07 compute-0 nova_compute[261329]: 2025-10-10 10:24:07.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:07.231Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:24:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:07.232Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:24:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:07.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:24:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:24:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:07.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:07.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:08 compute-0 nova_compute[261329]: 2025-10-10 10:24:08.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:08 compute-0 nova_compute[261329]: 2025-10-10 10:24:08.439 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:08.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:08 compute-0 ceph-mon[73551]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:09.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:09.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1433281411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:11 compute-0 ceph-mon[73551]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1672674889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:11.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:11.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:12 compute-0 nova_compute[261329]: 2025-10-10 10:24:12.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:12 compute-0 podman[293470]: 2025-10-10 10:24:12.235264763 +0000 UTC m=+0.077427846 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 10 10:24:12 compute-0 podman[293471]: 2025-10-10 10:24:12.24807474 +0000 UTC m=+0.090375427 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:24:12 compute-0 podman[293472]: 2025-10-10 10:24:12.261117833 +0000 UTC m=+0.098908436 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:24:13 compute-0 ceph-mon[73551]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:13 compute-0 nova_compute[261329]: 2025-10-10 10:24:13.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:13.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:13.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:15 compute-0 ceph-mon[73551]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:15.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:15.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:24:16
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr']
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:24:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:24:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:24:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:24:17 compute-0 nova_compute[261329]: 2025-10-10 10:24:17.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:17 compute-0 ceph-mon[73551]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:17.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:24:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:24:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:18 compute-0 nova_compute[261329]: 2025-10-10 10:24:18.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:19 compute-0 ceph-mon[73551]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 10:24:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:19.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 10:24:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:19.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:21 compute-0 ceph-mon[73551]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:21 compute-0 sudo[293542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:24:21 compute-0 sudo[293542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:21 compute-0 sudo[293542]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:21.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:21.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:22 compute-0 nova_compute[261329]: 2025-10-10 10:24:22.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:23 compute-0 nova_compute[261329]: 2025-10-10 10:24:23.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:23 compute-0 sudo[285384]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:23 compute-0 sshd-session[285383]: Received disconnect from 192.168.122.10 port 41266:11: disconnected by user
Oct 10 10:24:23 compute-0 sshd-session[285383]: Disconnected from user zuul 192.168.122.10 port 41266
Oct 10 10:24:23 compute-0 sshd-session[285380]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:24:23 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Oct 10 10:24:23 compute-0 systemd[1]: session-58.scope: Consumed 2min 54.465s CPU time, 794.3M memory peak, read 309.1M from disk, written 99.7M to disk.
Oct 10 10:24:23 compute-0 systemd-logind[806]: Session 58 logged out. Waiting for processes to exit.
Oct 10 10:24:23 compute-0 systemd-logind[806]: Removed session 58.
Oct 10 10:24:23 compute-0 sshd-session[293569]: Accepted publickey for zuul from 192.168.122.10 port 49710 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:24:23 compute-0 systemd-logind[806]: New session 59 of user zuul.
Oct 10 10:24:23 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct 10 10:24:23 compute-0 ceph-mon[73551]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:23 compute-0 sshd-session[293569]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:24:23 compute-0 sudo[293573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-10-gyeaieg.tar.xz
Oct 10 10:24:23 compute-0 sudo[293573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:24:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:23 compute-0 sudo[293573]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:23 compute-0 sshd-session[293572]: Received disconnect from 192.168.122.10 port 49710:11: disconnected by user
Oct 10 10:24:23 compute-0 sshd-session[293572]: Disconnected from user zuul 192.168.122.10 port 49710
Oct 10 10:24:23 compute-0 sshd-session[293569]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:24:23 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 10 10:24:23 compute-0 systemd-logind[806]: Session 59 logged out. Waiting for processes to exit.
Oct 10 10:24:23 compute-0 systemd-logind[806]: Removed session 59.
Oct 10 10:24:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:23.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:23 compute-0 sshd-session[293598]: Accepted publickey for zuul from 192.168.122.10 port 47286 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:24:23 compute-0 systemd-logind[806]: New session 60 of user zuul.
Oct 10 10:24:23 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct 10 10:24:23 compute-0 sshd-session[293598]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:24:23 compute-0 sudo[293602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 10 10:24:23 compute-0 sudo[293602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:24:23 compute-0 sudo[293602]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:23 compute-0 sshd-session[293601]: Received disconnect from 192.168.122.10 port 47286:11: disconnected by user
Oct 10 10:24:23 compute-0 sshd-session[293601]: Disconnected from user zuul 192.168.122.10 port 47286
Oct 10 10:24:23 compute-0 sshd-session[293598]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:24:23 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Oct 10 10:24:23 compute-0 systemd-logind[806]: Session 60 logged out. Waiting for processes to exit.
Oct 10 10:24:23 compute-0 systemd-logind[806]: Removed session 60.
Oct 10 10:24:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:24:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:23.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:24:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:25 compute-0 podman[293629]: 2025-10-10 10:24:25.243373626 +0000 UTC m=+0.080701559 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:24:25 compute-0 ceph-mon[73551]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:25.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:25.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:27 compute-0 nova_compute[261329]: 2025-10-10 10:24:27.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:27.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:24:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:27.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
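These two alertmanager entries show the ceph-dashboard webhook receivers on compute-1 and compute-2 failing: first a dial i/o timeout, then the dispatcher giving up after its retry budget ("context deadline exceeded"). A reachability check for the same endpoint, as a sketch only; the URL is taken verbatim from the error and the empty JSON body is a placeholder, not a real alert payload:

    import urllib.error
    import urllib.request

    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(URL, data=b"{}", method="POST")  # placeholder body
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)   # matches the i/o timeout logged above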
Oct 10 10:24:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:24:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
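The two GET /metrics hits are one scrape recorded twice: once on the mgr container's stdout and once by the prometheus module's cherrypy access log inside ceph-mgr. A sketch of an equivalent manual scrape, assuming the mgr prometheus module is listening on its default port 9283 (the port is not shown in the log):

    import urllib.request

    # 9283 is the mgr prometheus-module default; an assumption here.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        body = resp.read().decode()
    samples = sum(1 for line in body.splitlines() if line and not line.startswith("#"))
    print(len(body), "bytes,", samples, "samples")

The byte count should land near the 48455/48461 content lengths logged for the real scrapes.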
Oct 10 10:24:27 compute-0 ceph-mon[73551]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2262894800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:24:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/2262894800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
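These two dispatches from client.openstack are a capacity poll: a cluster "df" plus a quota lookup on the "volumes" pool. The CLI equivalents of those exact mon command prefixes, wrapped in a sketch (assumes the ceph CLI and a client keyring are available on the host):

    import json
    import subprocess

    def ceph_json(*args):
        # Same prefixes as the dispatched mon commands above.
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], "bytes free;", quota)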
Oct 10 10:24:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:27.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:28 compute-0 nova_compute[261329]: 2025-10-10 10:24:28.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:24:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:24:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:24:29 compute-0 ceph-mon[73551]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
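_set_new_cache_sizes is the mon's periodic cache autotuning (resizing its RocksDB and mon caches under the mon_memory_target budget, on this reading). In human units the figures in that line work out to roughly a 973 MiB cache with 328/332/304 MiB incremental/full/kv allocations; the conversion, for reference:

    for name, b in [("cache_size", 1020054731), ("inc_alloc", 343932928),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {b / 2**20:.1f} MiB")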
Oct 10 10:24:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:29.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:29.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:24:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:31 compute-0 ceph-mon[73551]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:31.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:32 compute-0 nova_compute[261329]: 2025-10-10 10:24:32.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:33 compute-0 nova_compute[261329]: 2025-10-10 10:24:33.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:33 compute-0 ceph-mon[73551]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:34 compute-0 ceph-mon[73551]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:35.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:36 compute-0 ceph-mon[73551]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:37 compute-0 nova_compute[261329]: 2025-10-10 10:24:37.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:37.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:37] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:24:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:37] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:24:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:24:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:24:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:37.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:38 compute-0 nova_compute[261329]: 2025-10-10 10:24:38.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:38 compute-0 ceph-mon[73551]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:38 compute-0 sudo[293662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:38 compute-0 sudo[293662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:38 compute-0 sudo[293662]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:38 compute-0 sudo[293687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 10:24:38 compute-0 sudo[293687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:38.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:39 compute-0 sudo[293687]: pam_unix(sudo:session): session closed for user root
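This sudo sequence is the cephadm orchestrator driving the host as ceph-admin: `which python3` first, then the copied, hash-named cephadm binary under /var/lib/ceph/<fsid>/ with `check-host` (and `gather-facts` a few lines below). A sketch of the same host validation run by hand, assuming a cephadm binary on PATH instead of the versioned copy the orchestrator ships:

    import subprocess

    # check-host runs cephadm's preflight checks (container engine, chrony,
    # hostname, etc.); a non-zero exit means the host fails validation.
    res = subprocess.run(["sudo", "cephadm", "check-host"],
                         capture_output=True, text=True)
    print(res.returncode)
    print(res.stdout or res.stderr)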
Oct 10 10:24:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:24:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:24:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:39 compute-0 sudo[293732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:39 compute-0 sudo[293732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:39 compute-0 sudo[293732]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:24:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:24:39 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:39 compute-0 sudo[293757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:24:39 compute-0 sudo[293757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:39 compute-0 sudo[293757]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:39.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:39.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:24:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:24:40 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:24:40 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:24:40 compute-0 sudo[293813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:40 compute-0 sudo[293813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:40 compute-0 sudo[293813]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:40 compute-0 sudo[293839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:24:40 compute-0 sudo[293839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.660250943 +0000 UTC m=+0.042789315 container create fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:24:40 compute-0 systemd[1]: Started libpod-conmon-fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188.scope.
Oct 10 10:24:40 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.643295502 +0000 UTC m=+0.025833904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.74544778 +0000 UTC m=+0.127986172 container init fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bose, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.752515175 +0000 UTC m=+0.135053537 container start fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.755616294 +0000 UTC m=+0.138154676 container attach fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:24:40 compute-0 admiring_bose[293919]: 167 167
Oct 10 10:24:40 compute-0 systemd[1]: libpod-fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188.scope: Deactivated successfully.
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.7601937 +0000 UTC m=+0.142732142 container died fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bose, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d7a45c16cb85afc9ae873cc36482b81bb64cf57a3d8f30c003b7273ed6e63f-merged.mount: Deactivated successfully.
Oct 10 10:24:40 compute-0 podman[293903]: 2025-10-10 10:24:40.809604546 +0000 UTC m=+0.192142908 container remove fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_bose, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 10:24:40 compute-0 systemd[1]: libpod-conmon-fdaa2b4001056376193c7133ba155aa5e658d72ef50407ebd901f9a7a6dc9188.scope: Deactivated successfully.
Oct 10 10:24:40 compute-0 podman[293943]: 2025-10-10 10:24:40.978292354 +0000 UTC m=+0.044272172 container create 246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 10 10:24:41 compute-0 systemd[1]: Started libpod-conmon-246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd.scope.
Oct 10 10:24:41 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dc701d14dd5fbb322c947356d71b7b5340fc2db11ef1c7dc6d6d43458f0a1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dc701d14dd5fbb322c947356d71b7b5340fc2db11ef1c7dc6d6d43458f0a1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dc701d14dd5fbb322c947356d71b7b5340fc2db11ef1c7dc6d6d43458f0a1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dc701d14dd5fbb322c947356d71b7b5340fc2db11ef1c7dc6d6d43458f0a1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0dc701d14dd5fbb322c947356d71b7b5340fc2db11ef1c7dc6d6d43458f0a1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:41 compute-0 podman[293943]: 2025-10-10 10:24:40.962623915 +0000 UTC m=+0.028603743 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:24:41 compute-0 podman[293943]: 2025-10-10 10:24:41.057074276 +0000 UTC m=+0.123054134 container init 246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:24:41 compute-0 podman[293943]: 2025-10-10 10:24:41.067468938 +0000 UTC m=+0.133448746 container start 246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:24:41 compute-0 ceph-mon[73551]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:24:41 compute-0 ceph-mon[73551]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:24:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:24:41 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:24:41 compute-0 podman[293943]: 2025-10-10 10:24:41.071008321 +0000 UTC m=+0.136988189 container attach 246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:24:41 compute-0 hardcore_dirac[293959]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:24:41 compute-0 hardcore_dirac[293959]: --> All data devices are unavailable
Oct 10 10:24:41 compute-0 systemd[1]: libpod-246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd.scope: Deactivated successfully.
Oct 10 10:24:41 compute-0 podman[293943]: 2025-10-10 10:24:41.420802744 +0000 UTC m=+0.486782572 container died 246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0dc701d14dd5fbb322c947356d71b7b5340fc2db11ef1c7dc6d6d43458f0a1e-merged.mount: Deactivated successfully.
Oct 10 10:24:41 compute-0 podman[293943]: 2025-10-10 10:24:41.463028031 +0000 UTC m=+0.529007849 container remove 246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 10:24:41 compute-0 systemd[1]: libpod-conmon-246ad5273321058c14878fe8d54c9c59ad5ad30bfa53a4bc6822b03d68b209dd.scope: Deactivated successfully.
Oct 10 10:24:41 compute-0 sudo[293839]: pam_unix(sudo:session): session closed for user root
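The batch run that just closed (launched at 10:24:40 as `lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd`) ended with "All data devices are unavailable": the LV already carries an OSD (osd.0, per the lvm list output further down), so ceph-volume has nothing to create and both helper containers exit cleanly. A dry-run sketch that previews what batch would do without touching the device, assuming ceph-volume's --report flag:

    import subprocess

    # --report makes lvm batch print its plan instead of executing it.
    res = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--report", "--format", "json"],
        capture_output=True, text=True,
    )
    print(res.returncode)
    print(res.stdout or res.stderr)   # empty plan expected when the LV is taken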
Oct 10 10:24:41 compute-0 sudo[293986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:41 compute-0 sudo[293986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:41 compute-0 sudo[293986]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:41 compute-0 sudo[294011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:24:41 compute-0 sudo[294011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:41 compute-0 sudo[294031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:24:41 compute-0 sudo[294031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:41 compute-0 sudo[294031]: pam_unix(sudo:session): session closed for user root
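The `ceph-volume ... lvm list --format json` command in the sudo line above inventories existing OSD LVs; its output is the JSON block printed by the quizzical_saha container below, keyed by OSD id. A sketch parsing that output into an id -> LV/fsid mapping, using the field names visible in the block:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])

Against the output below this would print osd 0 on /dev/ceph_vg0/ceph_lv0 with fsid c307f4a4-39e7-4a9c-9d19-a2b8712089ab.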
Oct 10 10:24:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:41.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:24:41.912 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:24:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:24:41.912 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:24:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:24:41.913 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:24:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:41.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:42.005824289 +0000 UTC m=+0.040107620 container create 09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_goldstine, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:24:42 compute-0 systemd[1]: Started libpod-conmon-09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52.scope.
Oct 10 10:24:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:42.080059646 +0000 UTC m=+0.114343007 container init 09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_goldstine, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:41.987388281 +0000 UTC m=+0.021671632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:42.086073158 +0000 UTC m=+0.120356489 container start 09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:42.089555859 +0000 UTC m=+0.123839220 container attach 09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:24:42 compute-0 peaceful_goldstine[294116]: 167 167
Oct 10 10:24:42 compute-0 systemd[1]: libpod-09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52.scope: Deactivated successfully.
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:42.091267074 +0000 UTC m=+0.125550405 container died 09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-578ed46396c03a61740c6ea8748e77f3b8f66b96ac33c6287c9aad9511557a17-merged.mount: Deactivated successfully.
Oct 10 10:24:42 compute-0 podman[294099]: 2025-10-10 10:24:42.132091745 +0000 UTC m=+0.166375076 container remove 09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_goldstine, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:24:42 compute-0 systemd[1]: libpod-conmon-09d137dbb72a310396b3bffeed5d769f299e3ffb2df4b27be14f1828655b2d52.scope: Deactivated successfully.
Oct 10 10:24:42 compute-0 nova_compute[261329]: 2025-10-10 10:24:42.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.303510381 +0000 UTC m=+0.036857446 container create 720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_saha, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:24:42 compute-0 systemd[1]: Started libpod-conmon-720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8.scope.
Oct 10 10:24:42 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70ea4a34bc28010c675b9b0c8273ad17c9d7da64179ee5d2fdfeafa0f8751f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70ea4a34bc28010c675b9b0c8273ad17c9d7da64179ee5d2fdfeafa0f8751f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70ea4a34bc28010c675b9b0c8273ad17c9d7da64179ee5d2fdfeafa0f8751f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70ea4a34bc28010c675b9b0c8273ad17c9d7da64179ee5d2fdfeafa0f8751f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.380765335 +0000 UTC m=+0.114112400 container init 720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.288772391 +0000 UTC m=+0.022119476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.38876431 +0000 UTC m=+0.122111365 container start 720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_saha, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.395842875 +0000 UTC m=+0.129189940 container attach 720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:24:42 compute-0 podman[294156]: 2025-10-10 10:24:42.414453768 +0000 UTC m=+0.076064845 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:24:42 compute-0 podman[294153]: 2025-10-10 10:24:42.42987494 +0000 UTC m=+0.091251920 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:24:42 compute-0 podman[294157]: 2025-10-10 10:24:42.443154884 +0000 UTC m=+0.102739847 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:24:42 compute-0 quizzical_saha[294158]: {
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:     "0": [
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:         {
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "devices": [
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "/dev/loop3"
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             ],
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "lv_name": "ceph_lv0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "lv_size": "21470642176",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "name": "ceph_lv0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "tags": {
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.cluster_name": "ceph",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.crush_device_class": "",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.encrypted": "0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.osd_id": "0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.type": "block",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.vdo": "0",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:                 "ceph.with_tpm": "0"
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             },
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "type": "block",
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:             "vg_name": "ceph_vg0"
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:         }
Oct 10 10:24:42 compute-0 quizzical_saha[294158]:     ]
Oct 10 10:24:42 compute-0 quizzical_saha[294158]: }
Oct 10 10:24:42 compute-0 systemd[1]: libpod-720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8.scope: Deactivated successfully.
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.672310951 +0000 UTC m=+0.405658016 container died 720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70ea4a34bc28010c675b9b0c8273ad17c9d7da64179ee5d2fdfeafa0f8751f0-merged.mount: Deactivated successfully.
Oct 10 10:24:42 compute-0 podman[294139]: 2025-10-10 10:24:42.721423837 +0000 UTC m=+0.454770902 container remove 720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_saha, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:24:42 compute-0 systemd[1]: libpod-conmon-720f6b7a3ef1de26d8830831631ccdf0581c144eb8c1b7c90c23e89d22dd06b8.scope: Deactivated successfully.
Oct 10 10:24:42 compute-0 sudo[294011]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:42 compute-0 sudo[294242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:42 compute-0 sudo[294242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:42 compute-0 sudo[294242]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:42 compute-0 sudo[294267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:24:42 compute-0 sudo[294267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:43 compute-0 ceph-mon[73551]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:43 compute-0 nova_compute[261329]: 2025-10-10 10:24:43.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.307552296 +0000 UTC m=+0.062152102 container create 63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_pare, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 10:24:43 compute-0 systemd[1]: Started libpod-conmon-63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6.scope.
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.277822639 +0000 UTC m=+0.032422545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:24:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.402865365 +0000 UTC m=+0.157465201 container init 63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.408673191 +0000 UTC m=+0.163272997 container start 63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:24:43 compute-0 gallant_pare[294352]: 167 167
Oct 10 10:24:43 compute-0 systemd[1]: libpod-63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6.scope: Deactivated successfully.
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.421873462 +0000 UTC m=+0.176473268 container attach 63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_pare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.422250193 +0000 UTC m=+0.176849999 container died 63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 10:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffcf039031f4470dc92376434064886cd170d7fa98d49afd64e1f10768a58ef-merged.mount: Deactivated successfully.
Oct 10 10:24:43 compute-0 podman[294335]: 2025-10-10 10:24:43.464663336 +0000 UTC m=+0.219263142 container remove 63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 10 10:24:43 compute-0 systemd[1]: libpod-conmon-63cc0729bcfa05d22e7e44ee7a6360aeb98bcb389a32a09145b88f21a36008e6.scope: Deactivated successfully.
Oct 10 10:24:43 compute-0 podman[294378]: 2025-10-10 10:24:43.636585598 +0000 UTC m=+0.052155483 container create eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 10:24:43 compute-0 systemd[1]: Started libpod-conmon-eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69.scope.
Oct 10 10:24:43 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5309dda1157a7f6ea69776915d5944b7fb34cd670ebda8973646dd365b8c310e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5309dda1157a7f6ea69776915d5944b7fb34cd670ebda8973646dd365b8c310e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5309dda1157a7f6ea69776915d5944b7fb34cd670ebda8973646dd365b8c310e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5309dda1157a7f6ea69776915d5944b7fb34cd670ebda8973646dd365b8c310e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:24:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:43.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:43 compute-0 podman[294378]: 2025-10-10 10:24:43.618040217 +0000 UTC m=+0.033610132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:24:43 compute-0 podman[294378]: 2025-10-10 10:24:43.719496422 +0000 UTC m=+0.135066367 container init eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_kepler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 10:24:43 compute-0 podman[294378]: 2025-10-10 10:24:43.730300646 +0000 UTC m=+0.145870531 container start eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_kepler, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:24:43 compute-0 podman[294378]: 2025-10-10 10:24:43.735801032 +0000 UTC m=+0.151370917 container attach eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 10:24:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:24:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:43.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:24:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:44 compute-0 lvm[294470]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:24:44 compute-0 lvm[294470]: VG ceph_vg0 finished
Oct 10 10:24:44 compute-0 angry_kepler[294395]: {}
Oct 10 10:24:44 compute-0 systemd[1]: libpod-eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69.scope: Deactivated successfully.
Oct 10 10:24:44 compute-0 podman[294378]: 2025-10-10 10:24:44.414641918 +0000 UTC m=+0.830211803 container died eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_kepler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:24:44 compute-0 systemd[1]: libpod-eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69.scope: Consumed 1.028s CPU time.
Oct 10 10:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5309dda1157a7f6ea69776915d5944b7fb34cd670ebda8973646dd365b8c310e-merged.mount: Deactivated successfully.
Oct 10 10:24:44 compute-0 podman[294378]: 2025-10-10 10:24:44.466356517 +0000 UTC m=+0.881926402 container remove eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:24:44 compute-0 systemd[1]: libpod-conmon-eff166ebf070bec5d0ddc0be03eaaa5d4a8de3e65f085e60ba1b707d3275db69.scope: Deactivated successfully.
Oct 10 10:24:44 compute-0 sudo[294267]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:24:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:24:44 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:44 compute-0 sudo[294484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:24:44 compute-0 sudo[294484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:44 compute-0 sudo[294484]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:45 compute-0 ceph-mon[73551]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:45.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:24:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:24:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:24:47 compute-0 ceph-mon[73551]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:47 compute-0 nova_compute[261329]: 2025-10-10 10:24:47.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:47.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:24:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:47.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:24:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:24:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:48 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:48 compute-0 nova_compute[261329]: 2025-10-10 10:24:48.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:48.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:49 compute-0 ceph-mon[73551]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:49.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:50 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:51 compute-0 ceph-mon[73551]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:51.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:24:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:24:52 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:52 compute-0 nova_compute[261329]: 2025-10-10 10:24:52.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:53 compute-0 nova_compute[261329]: 2025-10-10 10:24:53.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:53 compute-0 ceph-mon[73551]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:24:53 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 2984 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1517 writes, 5172 keys, 1517 commit groups, 1.0 writes per commit group, ingest: 5.56 MB, 0.01 MB/s
                                           Interval WAL: 1517 writes, 634 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:24:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:53.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:53.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:54 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.199842) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894199886, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2250, "num_deletes": 508, "total_data_size": 3357588, "memory_usage": 3413552, "flush_reason": "Manual Compaction"}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894223211, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 3255631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32333, "largest_seqno": 34582, "table_properties": {"data_size": 3245062, "index_size": 5975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 29579, "raw_average_key_size": 21, "raw_value_size": 3220417, "raw_average_value_size": 2305, "num_data_blocks": 256, "num_entries": 1397, "num_filter_entries": 1397, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091750, "oldest_key_time": 1760091750, "file_creation_time": 1760091894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 23418 microseconds, and 8732 cpu microseconds.
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.223261) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 3255631 bytes OK
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.223281) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.225021) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.225038) EVENT_LOG_v1 {"time_micros": 1760091894225033, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.225056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 3346190, prev total WAL file size 3346190, number of live WAL files 2.
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.226083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(3179KB)], [71(13MB)]
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894226122, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17070336, "oldest_snapshot_seqno": -1}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6495 keys, 14858746 bytes, temperature: kUnknown
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894304423, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14858746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14814789, "index_size": 26631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 170740, "raw_average_key_size": 26, "raw_value_size": 14697128, "raw_average_value_size": 2262, "num_data_blocks": 1053, "num_entries": 6495, "num_filter_entries": 6495, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.304643) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14858746 bytes
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.306242) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.8 rd, 189.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 13.2 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(9.8) write-amplify(4.6) OK, records in: 7528, records dropped: 1033 output_compression: NoCompression
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.306258) EVENT_LOG_v1 {"time_micros": 1760091894306250, "job": 40, "event": "compaction_finished", "compaction_time_micros": 78366, "compaction_time_cpu_micros": 37187, "output_level": 6, "num_output_files": 1, "total_output_size": 14858746, "num_input_records": 7528, "num_output_records": 6495, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894306855, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894309243, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.226034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.309348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.309363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.309366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.309368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:24:54.309370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:55 compute-0 ceph-mon[73551]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:55.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:24:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:55.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:24:56 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:56 compute-0 podman[294520]: 2025-10-10 10:24:56.223253995 +0000 UTC m=+0.064746996 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:24:57 compute-0 ceph-mon[73551]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:57 compute-0 nova_compute[261329]: 2025-10-10 10:24:57.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:57.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:24:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:24:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:24:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:57.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:57.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:58 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:58 compute-0 nova_compute[261329]: 2025-10-10 10:24:58.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:24:58.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:24:59 compute-0 ceph-mon[73551]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:59 compute-0 nova_compute[261329]: 2025-10-10 10:24:59.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:59 compute-0 nova_compute[261329]: 2025-10-10 10:24:59.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:24:59 compute-0 nova_compute[261329]: 2025-10-10 10:24:59.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:24:59 compute-0 nova_compute[261329]: 2025-10-10 10:24:59.261 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:24:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:59.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:24:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:59.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:00 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:01 compute-0 ceph-mon[73551]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:01 compute-0 nova_compute[261329]: 2025-10-10 10:25:01.257 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:25:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:01.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:01 compute-0 sudo[294546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:25:01 compute-0 sudo[294546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:01 compute-0 sudo[294546]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:02 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:02 compute-0 nova_compute[261329]: 2025-10-10 10:25:02.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:02 compute-0 nova_compute[261329]: 2025-10-10 10:25:02.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3269636147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1111097345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:03 compute-0 nova_compute[261329]: 2025-10-10 10:25:03.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:03 compute-0 ceph-mon[73551]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:03.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:03.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:04 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:04 compute-0 nova_compute[261329]: 2025-10-10 10:25:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-0 nova_compute[261329]: 2025-10-10 10:25:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-0 nova_compute[261329]: 2025-10-10 10:25:04.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-0 nova_compute[261329]: 2025-10-10 10:25:04.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-0 nova_compute[261329]: 2025-10-10 10:25:04.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-0 nova_compute[261329]: 2025-10-10 10:25:04.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:25:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.270 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.270 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.271 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.271 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.271 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:25:05 compute-0 ceph-mon[73551]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:05 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:25:05 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184273382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.713 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:25:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:05.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.873 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.875 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4447MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.875 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.875 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.947 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.947 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:25:05 compute-0 nova_compute[261329]: 2025-10-10 10:25:05.973 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:25:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:06 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1184273382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:06 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:25:06 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882845651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:06 compute-0 nova_compute[261329]: 2025-10-10 10:25:06.420 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:25:06 compute-0 nova_compute[261329]: 2025-10-10 10:25:06.425 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:25:06 compute-0 nova_compute[261329]: 2025-10-10 10:25:06.442 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:25:06 compute-0 nova_compute[261329]: 2025-10-10 10:25:06.443 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:25:06 compute-0 nova_compute[261329]: 2025-10-10 10:25:06.444 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:25:07 compute-0 nova_compute[261329]: 2025-10-10 10:25:07.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:07.238Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:25:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:07.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:07 compute-0 ceph-mon[73551]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3882845651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:25:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:25:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:07.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:07.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:08 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:08 compute-0 nova_compute[261329]: 2025-10-10 10:25:08.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:08.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:09 compute-0 ceph-mon[73551]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:09.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:10 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:11 compute-0 ceph-mon[73551]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:11.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:25:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:11.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:25:12 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:12 compute-0 nova_compute[261329]: 2025-10-10 10:25:12.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3659408900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:13 compute-0 nova_compute[261329]: 2025-10-10 10:25:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:13 compute-0 podman[294627]: 2025-10-10 10:25:13.231306656 +0000 UTC m=+0.077801762 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:25:13 compute-0 podman[294628]: 2025-10-10 10:25:13.249182925 +0000 UTC m=+0.094136832 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 10 10:25:13 compute-0 podman[294629]: 2025-10-10 10:25:13.261193378 +0000 UTC m=+0.099406570 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:25:13 compute-0 ceph-mon[73551]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1732526091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:25:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:13.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:25:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:13.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:14 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:15 compute-0 ceph-mon[73551]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:15.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:16.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:25:16
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:25:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:25:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:25:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:25:17 compute-0 nova_compute[261329]: 2025-10-10 10:25:17.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:17.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:25:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:25:17 compute-0 ceph-mon[73551]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:17.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:18.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:18 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:18 compute-0 nova_compute[261329]: 2025-10-10 10:25:18.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:19 compute-0 ceph-mon[73551]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:19.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:20.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:20 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:21 compute-0 ceph-mon[73551]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:21 compute-0 sudo[294699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:25:21 compute-0 sudo[294699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:21 compute-0 sudo[294699]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:22.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:22 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:22 compute-0 nova_compute[261329]: 2025-10-10 10:25:22.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:23 compute-0 nova_compute[261329]: 2025-10-10 10:25:23.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:23 compute-0 ceph-mon[73551]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:23.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:24.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:24 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:25 compute-0 ceph-mon[73551]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:25.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:25:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:26.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:25:26 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:26 compute-0 ceph-mon[73551]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:27 compute-0 podman[294730]: 2025-10-10 10:25:27.200014003 +0000 UTC m=+0.053534788 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 10 10:25:27 compute-0 nova_compute[261329]: 2025-10-10 10:25:27.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:27.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:25:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:25:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:27.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:25:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:28.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:25:28 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:28 compute-0 nova_compute[261329]: 2025-10-10 10:25:28.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:28.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:25:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:28.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:25:29 compute-0 ceph-mon[73551]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:29.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:30 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:31 compute-0 ceph-mon[73551]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:25:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:31.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:32.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:32 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:32 compute-0 nova_compute[261329]: 2025-10-10 10:25:32.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:33 compute-0 nova_compute[261329]: 2025-10-10 10:25:33.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:33 compute-0 ceph-mon[73551]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:33.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:34.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:34 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:35 compute-0 ceph-mon[73551]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:35.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:36.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:36 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:37.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:37 compute-0 nova_compute[261329]: 2025-10-10 10:25:37.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:37 compute-0 ceph-mon[73551]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:37] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:25:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:37] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Oct 10 10:25:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:37.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:38.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:38 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:38 compute-0 nova_compute[261329]: 2025-10-10 10:25:38.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:38.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:39 compute-0 ceph-mon[73551]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:40.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:40 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:41 compute-0 ceph-mon[73551]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:25:41.912 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:25:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:25:41.913 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:25:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:25:41.913 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:25:41 compute-0 sudo[294765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:25:41 compute-0 sudo[294765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:41 compute-0 sudo[294765]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:42.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:42 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:42 compute-0 nova_compute[261329]: 2025-10-10 10:25:42.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:43 compute-0 nova_compute[261329]: 2025-10-10 10:25:43.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:43 compute-0 ceph-mon[73551]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:44.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:44 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:44 compute-0 podman[294792]: 2025-10-10 10:25:44.237236382 +0000 UTC m=+0.073264898 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:25:44 compute-0 podman[294794]: 2025-10-10 10:25:44.25005589 +0000 UTC m=+0.089112302 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:25:44 compute-0 podman[294795]: 2025-10-10 10:25:44.270865814 +0000 UTC m=+0.103745249 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:25:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:44 compute-0 sudo[294857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:25:44 compute-0 sudo[294857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:44 compute-0 sudo[294857]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:44 compute-0 sudo[294883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:25:44 compute-0 sudo[294883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-0 ceph-mon[73551]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-0 sudo[294883]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:45.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:25:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:25:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:25:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:25:45 compute-0 sudo[294941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:25:45 compute-0 sudo[294941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:45 compute-0 sudo[294941]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:46 compute-0 sudo[294966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:25:46 compute-0 sudo[294966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:25:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:46.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:25:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:25:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:25:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:25:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:25:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:25:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:25:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:25:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.524184574 +0000 UTC m=+0.068718103 container create 58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:25:46 compute-0 systemd[1]: Started libpod-conmon-58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c.scope.
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.496973226 +0000 UTC m=+0.041506775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:25:46 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.631361681 +0000 UTC m=+0.175895230 container init 58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.638280612 +0000 UTC m=+0.182814121 container start 58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.641620899 +0000 UTC m=+0.186154498 container attach 58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:25:46 compute-0 strange_newton[295046]: 167 167
Oct 10 10:25:46 compute-0 systemd[1]: libpod-58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c.scope: Deactivated successfully.
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.646428102 +0000 UTC m=+0.190961681 container died 58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_newton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 10:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0a1e5ce8b6684bdd807e183f15baf3bb19198021760c78910fc9ae29ca81b4-merged.mount: Deactivated successfully.
Oct 10 10:25:46 compute-0 podman[295029]: 2025-10-10 10:25:46.690040652 +0000 UTC m=+0.234574151 container remove 58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_newton, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Oct 10 10:25:46 compute-0 systemd[1]: libpod-conmon-58a282a1e5d6057654860ecd20a2903409c7e144fbcb6e1593554f90dad60e0c.scope: Deactivated successfully.
Oct 10 10:25:46 compute-0 podman[295072]: 2025-10-10 10:25:46.954379901 +0000 UTC m=+0.067918557 container create fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:25:46 compute-0 systemd[1]: Started libpod-conmon-fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb.scope.
Oct 10 10:25:47 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7f5ad3e74ccaa52209308ca7f850b24fc508b55a576d17f93bfc64fd481660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7f5ad3e74ccaa52209308ca7f850b24fc508b55a576d17f93bfc64fd481660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7f5ad3e74ccaa52209308ca7f850b24fc508b55a576d17f93bfc64fd481660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7f5ad3e74ccaa52209308ca7f850b24fc508b55a576d17f93bfc64fd481660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7f5ad3e74ccaa52209308ca7f850b24fc508b55a576d17f93bfc64fd481660/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:47 compute-0 podman[295072]: 2025-10-10 10:25:46.92894909 +0000 UTC m=+0.042487726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:25:47 compute-0 podman[295072]: 2025-10-10 10:25:47.03182486 +0000 UTC m=+0.145363476 container init fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_solomon, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:25:47 compute-0 podman[295072]: 2025-10-10 10:25:47.041427927 +0000 UTC m=+0.154966543 container start fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 10:25:47 compute-0 podman[295072]: 2025-10-10 10:25:47.044294758 +0000 UTC m=+0.157833384 container attach fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_solomon, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:25:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:47.241Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:25:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:47.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:47 compute-0 nova_compute[261329]: 2025-10-10 10:25:47.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:47 compute-0 ceph-mon[73551]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:25:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:25:47 compute-0 mystifying_solomon[295088]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:25:47 compute-0 mystifying_solomon[295088]: --> All data devices are unavailable
Oct 10 10:25:47 compute-0 systemd[1]: libpod-fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb.scope: Deactivated successfully.
Oct 10 10:25:47 compute-0 podman[295072]: 2025-10-10 10:25:47.440706458 +0000 UTC m=+0.554245074 container died fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_solomon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee7f5ad3e74ccaa52209308ca7f850b24fc508b55a576d17f93bfc64fd481660-merged.mount: Deactivated successfully.
Oct 10 10:25:47 compute-0 podman[295072]: 2025-10-10 10:25:47.490199557 +0000 UTC m=+0.603738213 container remove fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_solomon, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:25:47 compute-0 systemd[1]: libpod-conmon-fecc86f9472c6a975f93a56649582771cc4e8d0d2a391bacd0eacb7ee3eea6eb.scope: Deactivated successfully.
Oct 10 10:25:47 compute-0 sudo[294966]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:47 compute-0 sudo[295115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:25:47 compute-0 sudo[295115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:47 compute-0 sudo[295115]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:47 compute-0 sudo[295140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:25:47 compute-0 sudo[295140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:47.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:48.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.079169637 +0000 UTC m=+0.037223457 container create 5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:25:48 compute-0 systemd[1]: Started libpod-conmon-5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c.scope.
Oct 10 10:25:48 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.143146587 +0000 UTC m=+0.101200417 container init 5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.148521708 +0000 UTC m=+0.106575528 container start 5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.151274026 +0000 UTC m=+0.109327846 container attach 5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:25:48 compute-0 festive_austin[295223]: 167 167
Oct 10 10:25:48 compute-0 systemd[1]: libpod-5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c.scope: Deactivated successfully.
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.153974852 +0000 UTC m=+0.112028692 container died 5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.063789767 +0000 UTC m=+0.021843607 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0d6070b83c8db3bafe6d7bee1fdee0a48540dc143918544b9f26a96d5c928f-merged.mount: Deactivated successfully.
Oct 10 10:25:48 compute-0 podman[295207]: 2025-10-10 10:25:48.185533468 +0000 UTC m=+0.143587288 container remove 5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 10:25:48 compute-0 nova_compute[261329]: 2025-10-10 10:25:48.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:48 compute-0 systemd[1]: libpod-conmon-5e9f3cc33a93a2eac8b8606131d2eb95a830c8ca309976b3e3af137dec72779c.scope: Deactivated successfully.
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.331553245 +0000 UTC m=+0.037869439 container create 4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_yonath, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:25:48 compute-0 systemd[1]: Started libpod-conmon-4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4.scope.
Oct 10 10:25:48 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e105c14019ebf530179f9efe1cfe5df2c68e3de6c8b18c5ad1fe86780c37db9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e105c14019ebf530179f9efe1cfe5df2c68e3de6c8b18c5ad1fe86780c37db9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e105c14019ebf530179f9efe1cfe5df2c68e3de6c8b18c5ad1fe86780c37db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e105c14019ebf530179f9efe1cfe5df2c68e3de6c8b18c5ad1fe86780c37db9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.314660416 +0000 UTC m=+0.020976630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.411942538 +0000 UTC m=+0.118258732 container init 4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.417994681 +0000 UTC m=+0.124310875 container start 4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_yonath, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.420871892 +0000 UTC m=+0.127188096 container attach 4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]: {
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:     "0": [
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:         {
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "devices": [
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "/dev/loop3"
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             ],
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "lv_name": "ceph_lv0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "lv_size": "21470642176",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "name": "ceph_lv0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "tags": {
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.cluster_name": "ceph",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.crush_device_class": "",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.encrypted": "0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.osd_id": "0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.type": "block",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.vdo": "0",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:                 "ceph.with_tpm": "0"
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             },
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "type": "block",
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:             "vg_name": "ceph_vg0"
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:         }
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]:     ]
Oct 10 10:25:48 compute-0 affectionate_yonath[295266]: }
Oct 10 10:25:48 compute-0 systemd[1]: libpod-4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4.scope: Deactivated successfully.
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.723099049 +0000 UTC m=+0.429415333 container died 4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 10:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e105c14019ebf530179f9efe1cfe5df2c68e3de6c8b18c5ad1fe86780c37db9-merged.mount: Deactivated successfully.
Oct 10 10:25:48 compute-0 podman[295249]: 2025-10-10 10:25:48.763263921 +0000 UTC m=+0.469580105 container remove 4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:25:48 compute-0 systemd[1]: libpod-conmon-4a58e220f89b4fed310cd1e0dfe208d4376c2364cd43fed046a37df0ce3d80c4.scope: Deactivated successfully.
Oct 10 10:25:48 compute-0 sudo[295140]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:48 compute-0 sudo[295287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:25:48 compute-0 sudo[295287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:48 compute-0 sudo[295287]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:48 compute-0 sudo[295313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:25:48 compute-0 sudo[295313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.298405375 +0000 UTC m=+0.037827608 container create 20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:25:49 compute-0 systemd[1]: Started libpod-conmon-20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f.scope.
Oct 10 10:25:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.366238667 +0000 UTC m=+0.105660890 container init 20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 10:25:49 compute-0 ceph-mon[73551]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.373351964 +0000 UTC m=+0.112774177 container start 20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:25:49 compute-0 youthful_saha[295393]: 167 167
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.282132016 +0000 UTC m=+0.021554259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:25:49 compute-0 systemd[1]: libpod-20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f.scope: Deactivated successfully.
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.378753656 +0000 UTC m=+0.118175949 container attach 20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_saha, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.379034915 +0000 UTC m=+0.118457128 container died 20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f1d4d8cbd028bac6d27156637b0e80dc6e2c81a6625e5334da50d3be344f34-merged.mount: Deactivated successfully.
Oct 10 10:25:49 compute-0 podman[295377]: 2025-10-10 10:25:49.411763619 +0000 UTC m=+0.151185832 container remove 20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:25:49 compute-0 systemd[1]: libpod-conmon-20912084dd8a23e9a21e95a0795d74abeb41b6173fd13787c028b774ee72a12f.scope: Deactivated successfully.
Oct 10 10:25:49 compute-0 podman[295416]: 2025-10-10 10:25:49.545137002 +0000 UTC m=+0.034175011 container create 2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wilbur, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:25:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:49 compute-0 systemd[1]: Started libpod-conmon-2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35.scope.
Oct 10 10:25:49 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197d3761455a3f99c8d0c3929f205c6b9093b7c26b7bab5ad19d04e20ece95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197d3761455a3f99c8d0c3929f205c6b9093b7c26b7bab5ad19d04e20ece95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197d3761455a3f99c8d0c3929f205c6b9093b7c26b7bab5ad19d04e20ece95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197d3761455a3f99c8d0c3929f205c6b9093b7c26b7bab5ad19d04e20ece95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:25:49 compute-0 podman[295416]: 2025-10-10 10:25:49.531119845 +0000 UTC m=+0.020157874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:25:49 compute-0 podman[295416]: 2025-10-10 10:25:49.629520602 +0000 UTC m=+0.118558631 container init 2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 10:25:49 compute-0 podman[295416]: 2025-10-10 10:25:49.639222372 +0000 UTC m=+0.128260401 container start 2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 10:25:49 compute-0 podman[295416]: 2025-10-10 10:25:49.64260292 +0000 UTC m=+0.131640949 container attach 2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wilbur, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:25:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:50.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:50 compute-0 lvm[295508]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:25:50 compute-0 lvm[295508]: VG ceph_vg0 finished
Oct 10 10:25:50 compute-0 elated_wilbur[295433]: {}
Oct 10 10:25:50 compute-0 systemd[1]: libpod-2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35.scope: Deactivated successfully.
Oct 10 10:25:50 compute-0 systemd[1]: libpod-2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35.scope: Consumed 1.054s CPU time.
Oct 10 10:25:50 compute-0 podman[295416]: 2025-10-10 10:25:50.32533936 +0000 UTC m=+0.814377399 container died 2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-09197d3761455a3f99c8d0c3929f205c6b9093b7c26b7bab5ad19d04e20ece95-merged.mount: Deactivated successfully.
Oct 10 10:25:50 compute-0 podman[295416]: 2025-10-10 10:25:50.362890016 +0000 UTC m=+0.851928015 container remove 2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:25:50 compute-0 systemd[1]: libpod-conmon-2fa6b7e3f63a817c58c8ab6226fbbd73b4fabdc68f017b092d0fd738f548ce35.scope: Deactivated successfully.
Oct 10 10:25:50 compute-0 sudo[295313]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:25:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:25:50 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:50 compute-0 sudo[295525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:25:50 compute-0 sudo[295525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:50 compute-0 sudo[295525]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:51 compute-0 ceph-mon[73551]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:51 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:51.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:52.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:52 compute-0 nova_compute[261329]: 2025-10-10 10:25:52.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:53 compute-0 nova_compute[261329]: 2025-10-10 10:25:53.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:53 compute-0 ceph-mon[73551]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:53.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:54.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:55 compute-0 ceph-mon[73551]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:56.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:57 compute-0 unix_chkpwd[295559]: password check failed for user (root)
Oct 10 10:25:57 compute-0 sshd-session[295556]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 10 10:25:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:57.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:57 compute-0 nova_compute[261329]: 2025-10-10 10:25:57.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:25:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:25:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:25:57 compute-0 ceph-mon[73551]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:58 compute-0 nova_compute[261329]: 2025-10-10 10:25:58.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:58 compute-0 podman[295560]: 2025-10-10 10:25:58.244814206 +0000 UTC m=+0.076678786 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 10 10:25:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:25:58.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:25:59 compute-0 ceph-mon[73551]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:59 compute-0 sshd-session[295556]: Failed password for root from 80.94.93.176 port 54512 ssh2
Oct 10 10:25:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:25:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:59.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:59 compute-0 unix_chkpwd[295583]: password check failed for user (root)
Oct 10 10:26:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:00 compute-0 nova_compute[261329]: 2025-10-10 10:26:00.443 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:00 compute-0 nova_compute[261329]: 2025-10-10 10:26:00.444 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:26:00 compute-0 nova_compute[261329]: 2025-10-10 10:26:00.444 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:26:00 compute-0 nova_compute[261329]: 2025-10-10 10:26:00.460 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:26:00 compute-0 ceph-mon[73551]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:26:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:01 compute-0 sshd-session[295556]: Failed password for root from 80.94.93.176 port 54512 ssh2
Oct 10 10:26:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:02.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:02 compute-0 sudo[295586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:26:02 compute-0 sudo[295586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:02 compute-0 sudo[295586]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:02 compute-0 nova_compute[261329]: 2025-10-10 10:26:02.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:02 compute-0 nova_compute[261329]: 2025-10-10 10:26:02.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:02 compute-0 nova_compute[261329]: 2025-10-10 10:26:02.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:02 compute-0 ceph-mon[73551]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:02 compute-0 unix_chkpwd[295612]: password check failed for user (root)
Oct 10 10:26:03 compute-0 nova_compute[261329]: 2025-10-10 10:26:03.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/212724700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:04.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:04 compute-0 sshd-session[295556]: Failed password for root from 80.94.93.176 port 54512 ssh2
Oct 10 10:26:04 compute-0 nova_compute[261329]: 2025-10-10 10:26:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:04 compute-0 ceph-mon[73551]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:04 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/470207897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:05 compute-0 nova_compute[261329]: 2025-10-10 10:26:05.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:05 compute-0 nova_compute[261329]: 2025-10-10 10:26:05.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:05 compute-0 sshd-session[295556]: Received disconnect from 80.94.93.176 port 54512:11:  [preauth]
Oct 10 10:26:05 compute-0 sshd-session[295556]: Disconnected from authenticating user root 80.94.93.176 port 54512 [preauth]
Oct 10 10:26:05 compute-0 sshd-session[295556]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 10 10:26:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:05.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:06.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:06 compute-0 nova_compute[261329]: 2025-10-10 10:26:06.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:06 compute-0 nova_compute[261329]: 2025-10-10 10:26:06.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:06 compute-0 nova_compute[261329]: 2025-10-10 10:26:06.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:26:06 compute-0 unix_chkpwd[295619]: password check failed for user (root)
Oct 10 10:26:06 compute-0 sshd-session[295616]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 10 10:26:06 compute-0 ceph-mon[73551]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:07.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.277 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.277 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.277 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.277 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.278 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:26:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:26:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:26:07 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647399066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.753 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:26:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.893 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.894 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4429MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.894 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:26:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.895 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:26:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3647399066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.975 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:26:07 compute-0 nova_compute[261329]: 2025-10-10 10:26:07.975 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:26:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.156 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:26:08 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949448778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.627 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.632 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.648 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.651 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:26:08 compute-0 nova_compute[261329]: 2025-10-10 10:26:08.651 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:26:08 compute-0 sshd-session[295616]: Failed password for root from 80.94.93.176 port 22528 ssh2
Oct 10 10:26:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:08 compute-0 ceph-mon[73551]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:08 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1949448778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:09 compute-0 unix_chkpwd[295667]: password check failed for user (root)
Oct 10 10:26:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:26:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:26:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:10.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:11 compute-0 ceph-mon[73551]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:11 compute-0 nova_compute[261329]: 2025-10-10 10:26:11.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:11 compute-0 nova_compute[261329]: 2025-10-10 10:26:11.265 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:11 compute-0 nova_compute[261329]: 2025-10-10 10:26:11.266 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:26:11 compute-0 nova_compute[261329]: 2025-10-10 10:26:11.290 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:26:11 compute-0 sshd-session[295616]: Failed password for root from 80.94.93.176 port 22528 ssh2
Oct 10 10:26:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:12 compute-0 unix_chkpwd[295670]: password check failed for user (root)
Oct 10 10:26:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:12.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:12 compute-0 nova_compute[261329]: 2025-10-10 10:26:12.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:13 compute-0 ceph-mon[73551]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:13 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/943224587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:13 compute-0 nova_compute[261329]: 2025-10-10 10:26:13.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:13 compute-0 sshd-session[295616]: Failed password for root from 80.94.93.176 port 22528 ssh2
Oct 10 10:26:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:14 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2125983211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:26:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:26:14 compute-0 nova_compute[261329]: 2025-10-10 10:26:14.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:14 compute-0 sshd-session[295616]: Received disconnect from 80.94.93.176 port 22528:11:  [preauth]
Oct 10 10:26:14 compute-0 sshd-session[295616]: Disconnected from authenticating user root 80.94.93.176 port 22528 [preauth]
Oct 10 10:26:14 compute-0 sshd-session[295616]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 10 10:26:15 compute-0 ceph-mon[73551]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:15 compute-0 podman[295677]: 2025-10-10 10:26:15.212569027 +0000 UTC m=+0.053583660 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 10 10:26:15 compute-0 podman[295678]: 2025-10-10 10:26:15.228083272 +0000 UTC m=+0.064327442 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:26:15 compute-0 podman[295679]: 2025-10-10 10:26:15.235896381 +0000 UTC m=+0.073206716 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:26:15 compute-0 unix_chkpwd[295739]: password check failed for user (root)
Oct 10 10:26:15 compute-0 sshd-session[295675]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 10 10:26:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:15.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:16.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:26:16
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', '.nfs', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images']
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:26:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:26:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:26:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:26:17 compute-0 ceph-mon[73551]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:17.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:17 compute-0 nova_compute[261329]: 2025-10-10 10:26:17.262 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:17 compute-0 nova_compute[261329]: 2025-10-10 10:26:17.263 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:26:17 compute-0 nova_compute[261329]: 2025-10-10 10:26:17.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:26:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:26:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:18.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:18 compute-0 nova_compute[261329]: 2025-10-10 10:26:18.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:18 compute-0 sshd-session[295675]: Failed password for root from 80.94.93.176 port 24126 ssh2
Oct 10 10:26:18 compute-0 unix_chkpwd[295743]: password check failed for user (root)
Oct 10 10:26:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:18.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:26:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:18.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:26:19 compute-0 ceph-mon[73551]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:19.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:20.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:20 compute-0 sshd-session[295675]: Failed password for root from 80.94.93.176 port 24126 ssh2
Oct 10 10:26:21 compute-0 ceph-mon[73551]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:21 compute-0 unix_chkpwd[295747]: password check failed for user (root)
Oct 10 10:26:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:22 compute-0 sudo[295749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:26:22 compute-0 sudo[295749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:22 compute-0 sudo[295749]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:22 compute-0 nova_compute[261329]: 2025-10-10 10:26:22.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:23 compute-0 ceph-mon[73551]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:23 compute-0 nova_compute[261329]: 2025-10-10 10:26:23.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:23 compute-0 sshd-session[295675]: Failed password for root from 80.94.93.176 port 24126 ssh2
Oct 10 10:26:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:23.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:26:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:24.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:26:24 compute-0 sshd-session[295675]: Received disconnect from 80.94.93.176 port 24126:11:  [preauth]
Oct 10 10:26:24 compute-0 sshd-session[295675]: Disconnected from authenticating user root 80.94.93.176 port 24126 [preauth]
Oct 10 10:26:24 compute-0 sshd-session[295675]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 10 10:26:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.591363) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984591426, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1037, "num_deletes": 251, "total_data_size": 1737839, "memory_usage": 1762296, "flush_reason": "Manual Compaction"}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984603409, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1080048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34583, "largest_seqno": 35619, "table_properties": {"data_size": 1076005, "index_size": 1631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10781, "raw_average_key_size": 20, "raw_value_size": 1067267, "raw_average_value_size": 2068, "num_data_blocks": 70, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091895, "oldest_key_time": 1760091895, "file_creation_time": 1760091984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 12078 microseconds, and 6373 cpu microseconds.
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603443) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1080048 bytes OK
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603462) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.605611) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.605623) EVENT_LOG_v1 {"time_micros": 1760091984605619, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.605638) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1733113, prev total WAL file size 1733113, number of live WAL files 2.
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.606243) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1054KB)], [74(14MB)]
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984606343, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15938794, "oldest_snapshot_seqno": -1}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6532 keys, 12459809 bytes, temperature: kUnknown
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984688131, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12459809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12419262, "index_size": 23091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 171675, "raw_average_key_size": 26, "raw_value_size": 12304655, "raw_average_value_size": 1883, "num_data_blocks": 907, "num_entries": 6532, "num_filter_entries": 6532, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760091984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.688505) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12459809 bytes
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.689815) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.7 rd, 152.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(26.3) write-amplify(11.5) OK, records in: 7011, records dropped: 479 output_compression: NoCompression
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.689844) EVENT_LOG_v1 {"time_micros": 1760091984689830, "job": 42, "event": "compaction_finished", "compaction_time_micros": 81878, "compaction_time_cpu_micros": 50706, "output_level": 6, "num_output_files": 1, "total_output_size": 12459809, "num_input_records": 7011, "num_output_records": 6532, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984690381, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984695428, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.606130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.695533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.695544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.695547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.695550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:26:24.695553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:25 compute-0 ceph-mon[73551]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:26:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1280316982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:26:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:26:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1280316982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:26:27 compute-0 ceph-mon[73551]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1280316982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:26:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1280316982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:26:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:27.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:27 compute-0 nova_compute[261329]: 2025-10-10 10:26:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:26:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:26:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:28.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:28 compute-0 nova_compute[261329]: 2025-10-10 10:26:28.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:28.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:26:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:28.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:26:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:28.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:29 compute-0 ceph-mon[73551]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:29 compute-0 systemd[1]: Starting dnf makecache...
Oct 10 10:26:29 compute-0 podman[295781]: 2025-10-10 10:26:29.222212534 +0000 UTC m=+0.062189664 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:26:29 compute-0 dnf[295782]: Metadata cache refreshed recently.
Oct 10 10:26:29 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 10 10:26:29 compute-0 systemd[1]: Finished dnf makecache.
Oct 10 10:26:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:29.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:31 compute-0 ceph-mon[73551]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:26:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:31.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:32.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:32 compute-0 nova_compute[261329]: 2025-10-10 10:26:32.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:33 compute-0 ceph-mon[73551]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:33 compute-0 nova_compute[261329]: 2025-10-10 10:26:33.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:35 compute-0 ceph-mon[73551]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:35.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:26:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:26:37 compute-0 ceph-mon[73551]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:37.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:37 compute-0 nova_compute[261329]: 2025-10-10 10:26:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:26:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:26:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:37.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:38 compute-0 nova_compute[261329]: 2025-10-10 10:26:38.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:38.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:39 compute-0 ceph-mon[73551]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:39.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:40.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:41 compute-0 ceph-mon[73551]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:41.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:26:41.914 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:26:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:26:41.915 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:26:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:26:41.915 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:26:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:42.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:42 compute-0 sudo[295815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:26:42 compute-0 sudo[295815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:42 compute-0 sudo[295815]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:42 compute-0 nova_compute[261329]: 2025-10-10 10:26:42.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:43 compute-0 nova_compute[261329]: 2025-10-10 10:26:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:43 compute-0 ceph-mon[73551]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:43.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:45 compute-0 ceph-mon[73551]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:45.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:46.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:46 compute-0 podman[295845]: 2025-10-10 10:26:46.220187612 +0000 UTC m=+0.065295952 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 10:26:46 compute-0 podman[295843]: 2025-10-10 10:26:46.252267166 +0000 UTC m=+0.095959611 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:26:46 compute-0 podman[295846]: 2025-10-10 10:26:46.276101645 +0000 UTC m=+0.110483973 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 10 10:26:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:26:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:26:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:26:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:26:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:26:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:26:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:26:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:47.249Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:26:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:47.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
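The dispatcher errors repeating through this window show the ceph-dashboard webhook receivers on compute-1 and compute-2 failing in two ways: the first attempt dies on a TCP dial timeout to 192.168.122.101:8443, and the retry is then canceled when the notification's context deadline expires, so the alert is dropped after two attempts. A minimal probe of the same endpoint, written as a sketch under the assumption that compute-0 can resolve the ctlplane names (the two-second timeout is illustrative, not Alertmanager's configured deadline):

    import json
    import urllib.request

    # Endpoint taken verbatim from the dispatcher log above; the empty
    # alert list is a hypothetical payload, just enough to exercise the POST.
    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        URL,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            print("receiver reachable:", resp.status)
    except OSError as exc:  # URLError subclasses OSError; covers the dial timeout
        print("receiver unreachable:", exc)

A dial timeout from this probe would point at the dashboard module not listening on 8443 on those hosts (or a firewall dropping the connection) rather than at Alertmanager itself.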
Oct 10 10:26:47 compute-0 ceph-mon[73551]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:47 compute-0 nova_compute[261329]: 2025-10-10 10:26:47.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:26:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:26:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:48.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:48 compute-0 nova_compute[261329]: 2025-10-10 10:26:48.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:48.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:49 compute-0 nova_compute[261329]: 2025-10-10 10:26:49.092 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:49 compute-0 ceph-mon[73551]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:49.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:50 compute-0 sudo[295914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:26:50 compute-0 sudo[295914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:50 compute-0 sudo[295914]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:50 compute-0 sudo[295939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:26:50 compute-0 sudo[295939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:51 compute-0 sudo[295939]: pam_unix(sudo:session): session closed for user root
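The pair of sudo sessions above is the cephadm orchestrator's usual remote-execution pattern: it first runs "which python3" to locate an interpreter, then executes the cephadm binary it has staged under /var/lib/ceph/<fsid>/ with that interpreter and a --timeout guard; this particular invocation ("gather-facts") collects host inventory for the mgr. The same pattern repeats below for the ceph-volume calls.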
Oct 10 10:26:51 compute-0 ceph-mon[73551]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:26:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:26:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:26:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:26:51 compute-0 sudo[295995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:26:51 compute-0 sudo[295995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:51 compute-0 sudo[295995]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:51 compute-0 sudo[296020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:26:51 compute-0 sudo[296020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:52.01813175 +0000 UTC m=+0.054455047 container create 0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mirzakhani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:26:52 compute-0 systemd[1]: Started libpod-conmon-0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7.scope.
Oct 10 10:26:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:51.991599564 +0000 UTC m=+0.027922931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:52.096789938 +0000 UTC m=+0.133113255 container init 0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:52.108248653 +0000 UTC m=+0.144571950 container start 0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mirzakhani, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:52.111707654 +0000 UTC m=+0.148030951 container attach 0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:26:52 compute-0 sleepy_mirzakhani[296103]: 167 167
Oct 10 10:26:52 compute-0 systemd[1]: libpod-0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7.scope: Deactivated successfully.
Oct 10 10:26:52 compute-0 conmon[296103]: conmon 0ac545e0c2fae2b9dffb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7.scope/container/memory.events
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:52.114692059 +0000 UTC m=+0.151015396 container died 0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mirzakhani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 10:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f392765cd6d2b70931063a527e9d44023256ce7c2119f45bed91a6d3e75931d9-merged.mount: Deactivated successfully.
Oct 10 10:26:52 compute-0 podman[296087]: 2025-10-10 10:26:52.154888441 +0000 UTC m=+0.191211728 container remove 0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:26:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:52 compute-0 systemd[1]: libpod-conmon-0ac545e0c2fae2b9dffbb66e63f6db26ffbd0c8bc530f807fd5fe1a4878f44c7.scope: Deactivated successfully.
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.30852955 +0000 UTC m=+0.044016035 container create 6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:26:52 compute-0 systemd[1]: Started libpod-conmon-6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2.scope.
Oct 10 10:26:52 compute-0 nova_compute[261329]: 2025-10-10 10:26:52.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:26:52 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:26:52 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890afe7851d52d1c5a806ea730bb9f0be2a1a9e81765341141c27a0900cb3d71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890afe7851d52d1c5a806ea730bb9f0be2a1a9e81765341141c27a0900cb3d71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890afe7851d52d1c5a806ea730bb9f0be2a1a9e81765341141c27a0900cb3d71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890afe7851d52d1c5a806ea730bb9f0be2a1a9e81765341141c27a0900cb3d71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/890afe7851d52d1c5a806ea730bb9f0be2a1a9e81765341141c27a0900cb3d71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.290148763 +0000 UTC m=+0.025635228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.388577022 +0000 UTC m=+0.124063487 container init 6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.40010674 +0000 UTC m=+0.135593185 container start 6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.403535339 +0000 UTC m=+0.139021784 container attach 6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:26:52 compute-0 adoring_jang[296143]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:26:52 compute-0 adoring_jang[296143]: --> All data devices are unavailable
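The adoring_jang output is ceph-volume's preflight report for the "lvm batch" call launched at 10:26:51: the single candidate it was passed is an LVM logical volume rather than a raw disk, and it is reported unavailable evidently because /dev/ceph_vg0/ceph_lv0 already carries an OSD (the ceph.osd_id=0 tags appear in the listing further down), so the batch run has nothing new to prepare. The orchestrator then runs "ceph-volume lvm list" in the next step to inventory the existing OSDs instead.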
Oct 10 10:26:52 compute-0 systemd[1]: libpod-6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2.scope: Deactivated successfully.
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.717899943 +0000 UTC m=+0.453386388 container died 6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 10:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-890afe7851d52d1c5a806ea730bb9f0be2a1a9e81765341141c27a0900cb3d71-merged.mount: Deactivated successfully.
Oct 10 10:26:52 compute-0 podman[296127]: 2025-10-10 10:26:52.754719257 +0000 UTC m=+0.490205692 container remove 6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:26:52 compute-0 systemd[1]: libpod-conmon-6dbaa5948d0d36a74928c605abd73dde15de83183d51c0553d2344e0d5ff16f2.scope: Deactivated successfully.
Oct 10 10:26:52 compute-0 sudo[296020]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:52 compute-0 sudo[296170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:26:52 compute-0 sudo[296170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:52 compute-0 sudo[296170]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:52 compute-0 sudo[296195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:26:52 compute-0 sudo[296195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.25751054 +0000 UTC m=+0.039456569 container create d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 10 10:26:53 compute-0 nova_compute[261329]: 2025-10-10 10:26:53.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:53 compute-0 systemd[1]: Started libpod-conmon-d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7.scope.
Oct 10 10:26:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.325856279 +0000 UTC m=+0.107802338 container init d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_beaver, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.332745819 +0000 UTC m=+0.114691858 container start d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.239473934 +0000 UTC m=+0.021419983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.336177748 +0000 UTC m=+0.118123777 container attach d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:26:53 compute-0 relaxed_beaver[296279]: 167 167
Oct 10 10:26:53 compute-0 systemd[1]: libpod-d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7.scope: Deactivated successfully.
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.338215463 +0000 UTC m=+0.120161492 container died d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_beaver, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Oct 10 10:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0b8d531ab44cbabd83970b3b3365f3ac1d1a4c05c1b4c0e2dcc2358a8b111c7-merged.mount: Deactivated successfully.
Oct 10 10:26:53 compute-0 podman[296263]: 2025-10-10 10:26:53.377295869 +0000 UTC m=+0.159241898 container remove d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:26:53 compute-0 systemd[1]: libpod-conmon-d36d0c9fa0b9752b9837fa2fd573cf76e06b4c8ff384502b6d4139958d038bf7.scope: Deactivated successfully.
Oct 10 10:26:53 compute-0 ceph-mon[73551]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.524519793 +0000 UTC m=+0.039999286 container create bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:26:53 compute-0 systemd[1]: Started libpod-conmon-bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1.scope.
Oct 10 10:26:53 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d42b714ee9c73b6282bcef4a52072c395d696d050e03ca73d0c511882345f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d42b714ee9c73b6282bcef4a52072c395d696d050e03ca73d0c511882345f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d42b714ee9c73b6282bcef4a52072c395d696d050e03ca73d0c511882345f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761d42b714ee9c73b6282bcef4a52072c395d696d050e03ca73d0c511882345f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.50749026 +0000 UTC m=+0.022969773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.605851147 +0000 UTC m=+0.121330660 container init bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.613863552 +0000 UTC m=+0.129343045 container start bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hodgkin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.616845927 +0000 UTC m=+0.132325420 container attach bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 10 10:26:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]: {
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:     "0": [
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:         {
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "devices": [
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "/dev/loop3"
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             ],
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "lv_name": "ceph_lv0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "lv_size": "21470642176",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "name": "ceph_lv0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "tags": {
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.cluster_name": "ceph",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.crush_device_class": "",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.encrypted": "0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.osd_id": "0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.type": "block",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.vdo": "0",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:                 "ceph.with_tpm": "0"
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             },
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "type": "block",
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:             "vg_name": "ceph_vg0"
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:         }
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]:     ]
Oct 10 10:26:53 compute-0 frosty_hodgkin[296319]: }
Oct 10 10:26:53 compute-0 systemd[1]: libpod-bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1.scope: Deactivated successfully.
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.890736171 +0000 UTC m=+0.406215654 container died bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-761d42b714ee9c73b6282bcef4a52072c395d696d050e03ca73d0c511882345f-merged.mount: Deactivated successfully.
Oct 10 10:26:53 compute-0 podman[296303]: 2025-10-10 10:26:53.940075734 +0000 UTC m=+0.455555257 container remove bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 10:26:53 compute-0 systemd[1]: libpod-conmon-bbd4641ee0c0296439604a17f499d1acc49b5f477bd9cc8c520ca3fd30b304d1.scope: Deactivated successfully.
Oct 10 10:26:54 compute-0 sudo[296195]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:54 compute-0 sudo[296341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:26:54 compute-0 sudo[296341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:54 compute-0 sudo[296341]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:54.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:54 compute-0 sudo[296366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:26:54 compute-0 sudo[296366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.671257639 +0000 UTC m=+0.040628817 container create 2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_montalcini, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:26:54 compute-0 systemd[1]: Started libpod-conmon-2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee.scope.
Oct 10 10:26:54 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.652241932 +0000 UTC m=+0.021613140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.751784277 +0000 UTC m=+0.121155485 container init 2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.758265624 +0000 UTC m=+0.127636802 container start 2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.762459927 +0000 UTC m=+0.131831105 container attach 2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:26:54 compute-0 interesting_montalcini[296449]: 167 167
Oct 10 10:26:54 compute-0 systemd[1]: libpod-2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee.scope: Deactivated successfully.
Oct 10 10:26:54 compute-0 conmon[296449]: conmon 2cfef0858ba424d01653 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee.scope/container/memory.events
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.764473811 +0000 UTC m=+0.133844989 container died 2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f031e1f15f3f2b842415cb78d74410e72ef97711b19d3e4415086934273c5f-merged.mount: Deactivated successfully.
Oct 10 10:26:54 compute-0 podman[296433]: 2025-10-10 10:26:54.80552165 +0000 UTC m=+0.174892838 container remove 2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:26:54 compute-0 systemd[1]: libpod-conmon-2cfef0858ba424d01653a713632a4c2e43a24b4ea25a610d84e835f1c5d890ee.scope: Deactivated successfully.
Oct 10 10:26:54 compute-0 podman[296472]: 2025-10-10 10:26:54.977092501 +0000 UTC m=+0.051777392 container create be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_euclid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:26:55 compute-0 systemd[1]: Started libpod-conmon-be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8.scope.
Oct 10 10:26:55 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:26:55 compute-0 podman[296472]: 2025-10-10 10:26:54.956401882 +0000 UTC m=+0.031086803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540fd51cf19309ca9b616d9e5c31c87e8a8fad7378ce09cea6ca4527ae199774/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540fd51cf19309ca9b616d9e5c31c87e8a8fad7378ce09cea6ca4527ae199774/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540fd51cf19309ca9b616d9e5c31c87e8a8fad7378ce09cea6ca4527ae199774/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540fd51cf19309ca9b616d9e5c31c87e8a8fad7378ce09cea6ca4527ae199774/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:26:55 compute-0 podman[296472]: 2025-10-10 10:26:55.068781695 +0000 UTC m=+0.143466656 container init be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 10:26:55 compute-0 podman[296472]: 2025-10-10 10:26:55.081862682 +0000 UTC m=+0.156547573 container start be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:26:55 compute-0 podman[296472]: 2025-10-10 10:26:55.084981412 +0000 UTC m=+0.159666413 container attach be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:26:55 compute-0 ceph-mon[73551]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 10 10:26:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:55 compute-0 lvm[296562]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:26:55 compute-0 lvm[296562]: VG ceph_vg0 finished
Oct 10 10:26:55 compute-0 ecstatic_euclid[296488]: {}
Oct 10 10:26:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:55 compute-0 systemd[1]: libpod-be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8.scope: Deactivated successfully.
Oct 10 10:26:55 compute-0 systemd[1]: libpod-be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8.scope: Consumed 1.253s CPU time.
Oct 10 10:26:55 compute-0 conmon[296488]: conmon be1c821cdafacf1c1364 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8.scope/container/memory.events
Oct 10 10:26:55 compute-0 podman[296566]: 2025-10-10 10:26:55.932849407 +0000 UTC m=+0.038989984 container died be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-540fd51cf19309ca9b616d9e5c31c87e8a8fad7378ce09cea6ca4527ae199774-merged.mount: Deactivated successfully.
Oct 10 10:26:55 compute-0 podman[296566]: 2025-10-10 10:26:55.971559201 +0000 UTC m=+0.077699738 container remove be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:26:55 compute-0 systemd[1]: libpod-conmon-be1c821cdafacf1c13643d6d84821b835c8c9aafbed868dc2c65a3d0af5e7ac8.scope: Deactivated successfully.
Oct 10 10:26:56 compute-0 sudo[296366]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:26:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:26:56 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:56 compute-0 sudo[296580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:26:56 compute-0 sudo[296580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:56 compute-0 sudo[296580]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:57 compute-0 ceph-mon[73551]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:57.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:26:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:57.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:26:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:57.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:26:57 compute-0 nova_compute[261329]: 2025-10-10 10:26:57.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:26:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:26:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:26:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 10 10:26:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:57.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:26:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:58.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:26:58 compute-0 nova_compute[261329]: 2025-10-10 10:26:58.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:26:58.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:26:59 compute-0 ceph-mon[73551]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 10 10:26:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:26:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:59.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:00 compute-0 podman[296609]: 2025-10-10 10:27:00.210984441 +0000 UTC m=+0.053282830 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 10 10:27:01 compute-0 ceph-mon[73551]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:01 compute-0 nova_compute[261329]: 2025-10-10 10:27:01.261 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:01 compute-0 nova_compute[261329]: 2025-10-10 10:27:01.262 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:27:01 compute-0 nova_compute[261329]: 2025-10-10 10:27:01.262 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:27:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:27:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:01 compute-0 nova_compute[261329]: 2025-10-10 10:27:01.597 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:27:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:27:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:27:02 compute-0 nova_compute[261329]: 2025-10-10 10:27:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:02 compute-0 sudo[296631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:27:02 compute-0 sudo[296631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:02 compute-0 sudo[296631]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:03 compute-0 ceph-mon[73551]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:03 compute-0 nova_compute[261329]: 2025-10-10 10:27:03.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:03 compute-0 nova_compute[261329]: 2025-10-10 10:27:03.568 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:03.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:27:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:04.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:27:04 compute-0 nova_compute[261329]: 2025-10-10 10:27:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 radosgw[95218]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 10 10:27:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:05 compute-0 ceph-mon[73551]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:05 compute-0 nova_compute[261329]: 2025-10-10 10:27:05.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:05 compute-0 nova_compute[261329]: 2025-10-10 10:27:05.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:05 compute-0 nova_compute[261329]: 2025-10-10 10:27:05.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/821806748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:06 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2564603434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:07 compute-0 ceph-mon[73551]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:07 compute-0 nova_compute[261329]: 2025-10-10 10:27:07.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:07 compute-0 nova_compute[261329]: 2025-10-10 10:27:07.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:07 compute-0 nova_compute[261329]: 2025-10-10 10:27:07.239 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:27:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:07.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:27:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:27:07 compute-0 nova_compute[261329]: 2025-10-10 10:27:07.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.278 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.279 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.279 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.279 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.280 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:27:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:27:08 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249927567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.770 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:27:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:08.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:27:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:08.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.925 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.926 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4462MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.927 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:27:08 compute-0 nova_compute[261329]: 2025-10-10 10:27:08.927 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.091 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.091 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.115 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing inventories for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:27:09 compute-0 ceph-mon[73551]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3249927567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.215 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating ProviderTree inventory for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.216 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Updating inventory in ProviderTree for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.236 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing aggregate associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.277 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Refreshing trait associations for resource provider 5b1ab6df-62aa-4a93-8e24-04440191f108, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_CLMUL,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.311 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:27:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.763 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.772 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.792 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.795 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:27:09 compute-0 nova_compute[261329]: 2025-10-10 10:27:09.796 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:27:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:10 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3915087126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:10.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:11 compute-0 ceph-mon[73551]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:12 compute-0 nova_compute[261329]: 2025-10-10 10:27:12.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:13 compute-0 ceph-mon[73551]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:13 compute-0 nova_compute[261329]: 2025-10-10 10:27:13.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:15 compute-0 ceph-mon[73551]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4181353496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:16.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:16 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/106958647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:27:16
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.nfs', 'backups', 'volumes', '.mgr']
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:27:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:27:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:27:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:27:17 compute-0 podman[296715]: 2025-10-10 10:27:17.228188261 +0000 UTC m=+0.061319926 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 10 10:27:17 compute-0 ceph-mon[73551]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:17.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:17 compute-0 podman[296716]: 2025-10-10 10:27:17.259625304 +0000 UTC m=+0.093520422 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:27:17 compute-0 podman[296717]: 2025-10-10 10:27:17.263895591 +0000 UTC m=+0.095317851 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:27:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:27:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:27:17 compute-0 nova_compute[261329]: 2025-10-10 10:27:17.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:17.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:18.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:18 compute-0 nova_compute[261329]: 2025-10-10 10:27:18.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:18.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:19 compute-0 ceph-mon[73551]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:19.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:21 compute-0 ceph-mon[73551]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:21.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:22.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:22 compute-0 nova_compute[261329]: 2025-10-10 10:27:22.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:22 compute-0 sudo[296784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:27:22 compute-0 sudo[296784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:22 compute-0 sudo[296784]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:23 compute-0 nova_compute[261329]: 2025-10-10 10:27:23.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:23 compute-0 ceph-mon[73551]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:25 compute-0 ceph-mon[73551]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:25.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:27.256Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:27:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:27.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:27 compute-0 ceph-mon[73551]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/4007566910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:27:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/4007566910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:27:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:27:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:27:27 compute-0 nova_compute[261329]: 2025-10-10 10:27:27.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:27.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:28 compute-0 nova_compute[261329]: 2025-10-10 10:27:28.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:28.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:29 compute-0 ceph-mon[73551]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:31 compute-0 podman[296818]: 2025-10-10 10:27:31.210291685 +0000 UTC m=+0.058675912 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 10 10:27:31 compute-0 nova_compute[261329]: 2025-10-10 10:27:31.220 2 DEBUG oslo_concurrency.processutils [None req-5428eec2-0e0c-4df7-adf7-b6b22d8050c9 e1aed125091e48e09d5990f110c14c39 ec962e275689437d80680ff3ea69c852 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:27:31 compute-0 nova_compute[261329]: 2025-10-10 10:27:31.246 2 DEBUG oslo_concurrency.processutils [None req-5428eec2-0e0c-4df7-adf7-b6b22d8050c9 e1aed125091e48e09d5990f110c14c39 ec962e275689437d80680ff3ea69c852 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:27:31 compute-0 ceph-mon[73551]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:27:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:27:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:31.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:27:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:32.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:32 compute-0 nova_compute[261329]: 2025-10-10 10:27:32.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:33 compute-0 nova_compute[261329]: 2025-10-10 10:27:33.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:33 compute-0 ceph-mon[73551]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:33.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:27:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:27:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:35 compute-0 ceph-mon[73551]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:35.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:27:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:36.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:27:36 compute-0 nova_compute[261329]: 2025-10-10 10:27:36.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:36 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:27:36.327 162925 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:27:36 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:27:36.329 162925 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:27:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:37.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:27:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:27:37 compute-0 nova_compute[261329]: 2025-10-10 10:27:37.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:37 compute-0 ceph-mon[73551]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:37.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:38 compute-0 nova_compute[261329]: 2025-10-10 10:27:38.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:38.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:39 compute-0 ceph-mon[73551]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:40.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:40 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:27:40.331 162925 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a1a60c06-0b75-41d0-88d4-dc571cb95004, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:27:41 compute-0 ceph-mon[73551]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:27:41.915 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:27:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:27:41.915 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:27:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:27:41.915 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:27:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:42.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:42 compute-0 nova_compute[261329]: 2025-10-10 10:27:42.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:42 compute-0 sudo[296849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:27:42 compute-0 sudo[296849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:42 compute-0 sudo[296849]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:43 compute-0 nova_compute[261329]: 2025-10-10 10:27:43.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:43 compute-0 ceph-mon[73551]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:44.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:45 compute-0 ceph-mon[73551]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:27:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:27:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:27:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:27:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:27:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:27:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:27:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:47.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:27:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:27:47 compute-0 nova_compute[261329]: 2025-10-10 10:27:47.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:47 compute-0 ceph-mon[73551]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:47.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:48 compute-0 podman[296881]: 2025-10-10 10:27:48.217837699 +0000 UTC m=+0.062024639 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 10 10:27:48 compute-0 podman[296879]: 2025-10-10 10:27:48.241230205 +0000 UTC m=+0.083537386 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 10 10:27:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:48 compute-0 podman[296882]: 2025-10-10 10:27:48.27308806 +0000 UTC m=+0.108056836 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
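The three health_status entries above are podman's periodic healthchecks for the iscsid, multipathd and ovn_controller containers, each running the /openstack/healthcheck script named in its config_data and reporting healthy with a failing streak of 0. Podman versions new enough to emit health_status events let you watch these live; JSON field names vary across podman releases, so the sketch just prints the raw events:

    # watch_health.py - stream podman health events like the entries
    # above. Assumes a podman that emits health_status events.
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=health_status"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line.rstrip())  # one JSON object per event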
Oct 10 10:27:48 compute-0 nova_compute[261329]: 2025-10-10 10:27:48.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:48 compute-0 ceph-mon[73551]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
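The mon and mgr each republish the pgmap summary about once per second, and every entry in this capture reports the same healthy picture: all 353 PGs active+clean, 41 MiB of data in a 60 GiB cluster. The line format is regular enough to parse mechanically; the regex below is fitted to the entries in this log and may need adjusting for other Ceph releases:

    # parse_pgmap.py - extract the figures from a pgmap summary line.
    import re

    LINE = ("pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, "
            "273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s")

    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail", LINE)
    print(m.groupdict())
    # {'ver': '1257', 'pgs': '353', 'states': '353 active+clean', ...}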
Oct 10 10:27:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:48.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
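_set_new_cache_sizes is the monitor's periodic cache autotuning, and the figures it logs are bytes. Converting them shows a target of roughly 973 MiB split across the incremental-map, full-map and RocksDB caches:

    # cache_split.py - the _set_new_cache_sizes figures, in MiB.
    sizes = {"cache_size": 1020054731, "inc_alloc": 343932928,
             "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, nbytes in sizes.items():
        print(f"{name:>10}: {nbytes / 2**20:7.1f} MiB")
    # cache_size ~972.8 MiB; the three allocations sum to 964.0 MiB.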
Oct 10 10:27:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:50 compute-0 ceph-mon[73551]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:52 compute-0 nova_compute[261329]: 2025-10-10 10:27:52.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:52 compute-0 ceph-mon[73551]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:53 compute-0 nova_compute[261329]: 2025-10-10 10:27:53.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:27:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:53.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:27:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 10:27:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 10:27:54 compute-0 ceph-mon[73551]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:56 compute-0 sudo[296948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:27:56 compute-0 sudo[296948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:56 compute-0 sudo[296948]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:56 compute-0 sudo[296973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:27:56 compute-0 sudo[296973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:56 compute-0 ceph-mon[73551]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:57 compute-0 sudo[296973]: pam_unix(sudo:session): session closed for user root
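This sudo pair is cephadm's standard remote-execution pattern: the mgr's cephadm module logs in as ceph-admin, resolves python3 with `which`, then runs the checksummed copy of the cephadm binary under sudo; `gather-facts` returns the host inventory as JSON. Replayed by hand, with the binary path copied verbatim from the COMMAND= fields above:

    # gather_facts.py - the same two steps the sudo log records.
    import subprocess

    CEPHADM = ("/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af"
               "2ee40ac466f0ac36")

    python3 = subprocess.check_output(
        ["sudo", "/bin/which", "python3"], text=True).strip()
    facts = subprocess.check_output(
        ["sudo", python3, CEPHADM, "--timeout", "895", "gather-facts"],
        text=True)
    print(facts[:120], "...")  # JSON host inventory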
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:27:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
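The burst of handle_command entries above is the fixed preamble the cephadm mgr module dispatches before (re)applying an OSD spec: mint a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keys, persist its queue and service-spec state under config-keys, and list any destroyed OSDs eligible for replacement. Roughly the same sequence, replayed through the ceph CLI:

    # mon_preamble.py - the audit-logged commands, via the ceph CLI.
    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.admin"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
    ):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)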
Oct 10 10:27:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:57.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:27:57 compute-0 sudo[297030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:27:57 compute-0 sudo[297030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:57 compute-0 sudo[297030]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:57 compute-0 sudo[297055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:27:57 compute-0 sudo[297055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:27:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:27:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 10 10:27:57 compute-0 nova_compute[261329]: 2025-10-10 10:27:57.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:27:57 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.696805302 +0000 UTC m=+0.039832871 container create 37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 10:27:57 compute-0 systemd[1]: Started libpod-conmon-37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009.scope.
Oct 10 10:27:57 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.770506663 +0000 UTC m=+0.113534292 container init 37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_benz, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.68011745 +0000 UTC m=+0.023145039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.777364171 +0000 UTC m=+0.120391760 container start 37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_benz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.781369609 +0000 UTC m=+0.124397208 container attach 37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_benz, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 10:27:57 compute-0 cool_benz[297138]: 167 167
Oct 10 10:27:57 compute-0 systemd[1]: libpod-37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009.scope: Deactivated successfully.
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.78674189 +0000 UTC m=+0.129769479 container died 37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b30e3565a2e471821b055a868bf1984aeb58acc3a30084affe84e8539dbdb803-merged.mount: Deactivated successfully.
Oct 10 10:27:57 compute-0 podman[297121]: 2025-10-10 10:27:57.832418987 +0000 UTC m=+0.175446556 container remove 37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_benz, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:27:57 compute-0 systemd[1]: libpod-conmon-37d9f7ec2f4d13b95deed5648554defa02085435d7d11c97e289ee48c7fe6009.scope: Deactivated successfully.
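Blocks like the one above repeat for every cephadm ceph-volume call: podman spins up a one-shot container (create, init, start, attach), it prints a single line and exits, and systemd tears the scope down (died, remove). The "167 167" output plausibly matches cephadm's uid/gid probe of the image (167 is the ceph user and group in the upstream container image), though the log itself does not say so. The whole round trip is measurable from the podman event timestamps:

    # lifecycle.py - container lifetime from the podman event times.
    from datetime import datetime

    CREATE = "2025-10-10 10:27:57.696805302"
    REMOVE = "2025-10-10 10:27:57.832418987"
    FMT = "%Y-%m-%d %H:%M:%S.%f"
    # datetime keeps microseconds, so trim the nanosecond digits
    dt = (datetime.strptime(REMOVE[:26], FMT)
          - datetime.strptime(CREATE[:26], FMT))
    print(f"{dt.total_seconds() * 1000:.1f} ms")  # 135.6 ms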
Oct 10 10:27:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:57.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:58.007033744 +0000 UTC m=+0.040144390 container create 5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:27:58 compute-0 systemd[1]: Started libpod-conmon-5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033.scope.
Oct 10 10:27:58 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca02ef9ba29e0faffcaaba85fa72dc1dc34c61d33bb647c668fd0651052194f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca02ef9ba29e0faffcaaba85fa72dc1dc34c61d33bb647c668fd0651052194f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca02ef9ba29e0faffcaaba85fa72dc1dc34c61d33bb647c668fd0651052194f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca02ef9ba29e0faffcaaba85fa72dc1dc34c61d33bb647c668fd0651052194f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca02ef9ba29e0faffcaaba85fa72dc1dc34c61d33bb647c668fd0651052194f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
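The xfs "timestamps until 2038" notices fire once per bind-mount target whenever podman assembles the overlay for one of these short-lived containers; they are informational, flagging that the filesystem's 32-bit timestamps cap out at the classic time_t limit:

    # y2038.py - 0x7fffffff is the 32-bit time_t maximum.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00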
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:57.988603707 +0000 UTC m=+0.021714383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:58.088469971 +0000 UTC m=+0.121580647 container init 5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:58.095171765 +0000 UTC m=+0.128282401 container start 5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:58.0984642 +0000 UTC m=+0.131574876 container attach 5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:27:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:58 compute-0 nova_compute[261329]: 2025-10-10 10:27:58.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:58 compute-0 intelligent_chatterjee[297179]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:27:58 compute-0 intelligent_chatterjee[297179]: --> All data devices are unavailable
Oct 10 10:27:58 compute-0 systemd[1]: libpod-5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033.scope: Deactivated successfully.
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:58.441287271 +0000 UTC m=+0.474397977 container died 5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ca02ef9ba29e0faffcaaba85fa72dc1dc34c61d33bb647c668fd0651052194f-merged.mount: Deactivated successfully.
Oct 10 10:27:58 compute-0 podman[297163]: 2025-10-10 10:27:58.495269983 +0000 UTC m=+0.528380629 container remove 5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:27:58 compute-0 systemd[1]: libpod-conmon-5b96ff6f47da4862c799618bdbf073e3ec7314a24479efa3c3ec9f139f687033.scope: Deactivated successfully.
Oct 10 10:27:58 compute-0 sudo[297055]: pam_unix(sudo:session): session closed for user root
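The `lvm batch` run launched at 10:27:57 ended with "--> All data devices are unavailable", which here is the idempotent outcome rather than a failure: the one LV passed in (/dev/ceph_vg0/ceph_lv0) already carries OSD 0, as the `lvm list` output further down shows via its ceph.osd_id tag. The LV tags are the signal to check by hand; a sketch using lvs, with the VG name taken from the log:

    # lv_in_use.py - is an LV already claimed by a Ceph OSD? The
    # ceph.osd_id LV tag is the marker ceph-volume leaves behind.
    import json, subprocess

    out = subprocess.check_output(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags",
         "ceph_vg0"], text=True)
    for lv in json.loads(out)["report"][0]["lv"]:
        taken = "ceph.osd_id=" in lv["lv_tags"]
        print(lv["lv_name"], "already an OSD" if taken else "available")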
Oct 10 10:27:58 compute-0 sudo[297208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:27:58 compute-0 sudo[297208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:58 compute-0 sudo[297208]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:58 compute-0 sudo[297233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:27:58 compute-0 sudo[297233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:58.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:27:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:27:58.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
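The warn line narrows the failure down a layer: the underlying Go error is "dial tcp ... i/o timeout", meaning the TCP connect itself never completes, so the receivers are down or the port is filtered rather than answering with an HTTP error. The transport-level check in isolation:

    # dial_check.py - reproduce just the dial step from the warn line.
    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "tcp connect ok")
        except OSError as exc:
            print(host, "dial failed:", exc)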
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.135498747 +0000 UTC m=+0.048839458 container create 5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamport, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 10:27:59 compute-0 systemd[1]: Started libpod-conmon-5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf.scope.
Oct 10 10:27:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.213764692 +0000 UTC m=+0.127105443 container init 5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.121124488 +0000 UTC m=+0.034465219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.226818889 +0000 UTC m=+0.140159640 container start 5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.230433454 +0000 UTC m=+0.143774185 container attach 5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:27:59 compute-0 busy_lamport[297315]: 167 167
Oct 10 10:27:59 compute-0 systemd[1]: libpod-5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf.scope: Deactivated successfully.
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.232450299 +0000 UTC m=+0.145791040 container died 5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-287616b7a8544e97166e507eb177dffc4c9403969a4bf482b51339fba85f107a-merged.mount: Deactivated successfully.
Oct 10 10:27:59 compute-0 podman[297299]: 2025-10-10 10:27:59.275207571 +0000 UTC m=+0.188548322 container remove 5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 10:27:59 compute-0 systemd[1]: libpod-conmon-5769e87b6faad262e92cca033150c756e755a7f2310612f0c8ac283fe42609cf.scope: Deactivated successfully.
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.456880284 +0000 UTC m=+0.049130647 container create f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:27:59 compute-0 systemd[1]: Started libpod-conmon-f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a.scope.
Oct 10 10:27:59 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4025fcf64e915a4dfd5fca9b1d9baa66191a0e49e7a76db77828621147314044/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4025fcf64e915a4dfd5fca9b1d9baa66191a0e49e7a76db77828621147314044/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4025fcf64e915a4dfd5fca9b1d9baa66191a0e49e7a76db77828621147314044/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4025fcf64e915a4dfd5fca9b1d9baa66191a0e49e7a76db77828621147314044/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.436612239 +0000 UTC m=+0.028862642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.543509817 +0000 UTC m=+0.135760170 container init f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.550908463 +0000 UTC m=+0.143158856 container start f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_montalcini, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.554475927 +0000 UTC m=+0.146726340 container attach f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 10 10:27:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]: {
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:     "0": [
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:         {
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "devices": [
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "/dev/loop3"
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             ],
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "lv_name": "ceph_lv0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "lv_size": "21470642176",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "name": "ceph_lv0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "tags": {
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.cluster_name": "ceph",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.crush_device_class": "",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.encrypted": "0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.osd_id": "0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.type": "block",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.vdo": "0",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:                 "ceph.with_tpm": "0"
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             },
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "type": "block",
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:             "vg_name": "ceph_vg0"
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:         }
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:     ]
Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]: }
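The block above is `ceph-volume lvm list --format json` arriving one journal line at a time under the container's name: a map of OSD id to its LVs, devices and tags (OSD 0 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3). Stripping the journald prefixes and reassembling yields machine-readable output again; the regex assumes the prefix format as captured here:

    # parse_lvm_list.py - recover the JSON from journal lines like the
    # block above.
    import json, re

    def parse(journal_lines):
        payload = [m.group(1)
                   for line in journal_lines
                   if (m := re.search(r"\w+\[\d+\]: (.*)$", line))]
        return json.loads("\n".join(payload))

    demo = [
        "Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]: {",
        'Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]:     "0": []',
        "Oct 10 10:27:59 compute-0 nostalgic_montalcini[297354]: }",
    ]
    print(parse(demo))  # {'0': []}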
Oct 10 10:27:59 compute-0 systemd[1]: libpod-f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a.scope: Deactivated successfully.
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.897035239 +0000 UTC m=+0.489285592 container died f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 10:27:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:27:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:59.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4025fcf64e915a4dfd5fca9b1d9baa66191a0e49e7a76db77828621147314044-merged.mount: Deactivated successfully.
Oct 10 10:27:59 compute-0 podman[297338]: 2025-10-10 10:27:59.949886324 +0000 UTC m=+0.542136667 container remove f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 10:27:59 compute-0 systemd[1]: libpod-conmon-f15fed3229cef9228c71aa7a2b96d92c02697f562a872772d7ec9cd74982521a.scope: Deactivated successfully.
Oct 10 10:28:00 compute-0 sudo[297233]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:00 compute-0 sudo[297378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:28:00 compute-0 sudo[297378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:00 compute-0 sudo[297378]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:00 compute-0 sudo[297403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:28:00 compute-0 sudo[297403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
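After `lvm batch` and `lvm list`, cephadm issues `raw list` as well: the raw backend covers OSDs prepared directly on a device without LVM, and together the two listings form the device inventory the mgr reconciles against the drive-group spec. A sketch replaying both calls through the host's cephadm wrapper, with the fsid from the log; some cephadm versions print status lines ahead of the JSON, which would need filtering before parsing:

    # volume_inventory.py - the two listing calls from the log, replayed.
    import json, subprocess

    FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"

    def ceph_volume(*args):
        out = subprocess.check_output(
            ["cephadm", "ceph-volume", "--fsid", FSID, "--", *args],
            text=True)
        return json.loads(out)  # assumes the JSON is the only stdout

    lvm = ceph_volume("lvm", "list", "--format", "json")
    raw = ceph_volume("raw", "list", "--format", "json")
    print("lvm OSDs:", sorted(lvm), "raw OSDs:", sorted(raw))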
Oct 10 10:28:00 compute-0 ceph-mon[73551]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.695569082 +0000 UTC m=+0.064581560 container create d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 10:28:00 compute-0 systemd[1]: Started libpod-conmon-d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8.scope.
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.672281289 +0000 UTC m=+0.041293847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:28:00 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.793722782 +0000 UTC m=+0.162735270 container init d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.802403419 +0000 UTC m=+0.171415927 container start d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.80652439 +0000 UTC m=+0.175536958 container attach d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_brown, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:28:00 compute-0 systemd[1]: libpod-d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8.scope: Deactivated successfully.
Oct 10 10:28:00 compute-0 peaceful_brown[297485]: 167 167
Oct 10 10:28:00 compute-0 conmon[297485]: conmon d2d79524e1b877ae9059 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8.scope/container/memory.events
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.809854506 +0000 UTC m=+0.178867014 container died d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_brown, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e33d1a919f656b0e5aa8d85f8921b5b2039188bb17b1a2dc2313cb4308930838-merged.mount: Deactivated successfully.
Oct 10 10:28:00 compute-0 podman[297469]: 2025-10-10 10:28:00.867282098 +0000 UTC m=+0.236294616 container remove d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 10:28:00 compute-0 systemd[1]: libpod-conmon-d2d79524e1b877ae905916538fdde9f1491fcf57037771d244cc5d04f2ffe4c8.scope: Deactivated successfully.
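Annotation: the block from the container create through this line is one complete throwaway run: create, image resolve, init, start, attach, died, remove, all within roughly 200 ms, bracketed by the libpod-conmon and libpod systemd scopes. Two details worth noting: the "image pull" event carries an earlier timestamp than "create" because podman journals events asynchronously, so arrival order is not occurrence order; and the container's entire output was "167 167", plausibly the stat probe cephadm uses to learn the ceph uid and gid inside the image (an inference; the log does not show the entrypoint). A sketch of the same one-shot pattern:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --rm reproduces the create/init/start/attach/died/remove event chain;
    # the stat entrypoint is an assumption about what printed "167 167".
    subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True)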
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.088865414 +0000 UTC m=+0.055916665 container create f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:28:01 compute-0 systemd[1]: Started libpod-conmon-f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552.scope.
Oct 10 10:28:01 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8006eeece26bf62f0bac9b474b9f07861fe5674792f0ec87dc67341aeccd930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8006eeece26bf62f0bac9b474b9f07861fe5674792f0ec87dc67341aeccd930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8006eeece26bf62f0bac9b474b9f07861fe5674792f0ec87dc67341aeccd930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8006eeece26bf62f0bac9b474b9f07861fe5674792f0ec87dc67341aeccd930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
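Annotation: the four xfs warnings fire once per bind mount the helper container sets up (rootfs, ceph.conf, the log directory, the crash directory). They mean the backing filesystem stores 32-bit inode timestamps, valid only up to 0x7fffffff seconds after the epoch; xfs filesystems created with the bigtime feature do not carry this cap. The cutoff, worked out:

    import datetime

    # 0x7fffffff is the largest 32-bit signed time_t the mounted xfs can store.
    limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc)
    print(limit)   # 2038-01-19 03:14:07+00:00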
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.162940965 +0000 UTC m=+0.129992216 container init f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.167999336 +0000 UTC m=+0.135050567 container start f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.073578676 +0000 UTC m=+0.040629927 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.170853227 +0000 UTC m=+0.137904518 container attach f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:28:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:28:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:01 compute-0 lvm[297607]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:28:01 compute-0 lvm[297607]: VG ceph_vg0 finished
Oct 10 10:28:01 compute-0 cranky_swartz[297527]: {}
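Annotation: "{}" is the entire stdout of the cranky_swartz run: ceph-volume raw list found no raw-mode OSDs to report. That is consistent with the lvm lines just above, which show the OSD volume group (ceph_vg0 on /dev/loop3) coming online; LVM-prepared OSDs appear under the lvm subcommand instead, as in this sketch (in practice cephadm wraps it in a container like the runs above):

    import json, subprocess

    # LVM-prepared OSDs are listed here rather than by "raw list".
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True)
    print(list(json.loads(out)))   # OSD ids present on this host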
Oct 10 10:28:01 compute-0 systemd[1]: libpod-f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552.scope: Deactivated successfully.
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.882738287 +0000 UTC m=+0.849789528 container died f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 10:28:01 compute-0 systemd[1]: libpod-f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552.scope: Consumed 1.187s CPU time.
Oct 10 10:28:01 compute-0 podman[297600]: 2025-10-10 10:28:01.89318148 +0000 UTC m=+0.097101387 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
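Annotation: this health_status event is podman's healthcheck timer re-running the configured test for ovn_metadata_agent, per the "healthcheck" key embedded in config_data: the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. The same check can be driven on demand:

    import subprocess

    # "podman healthcheck run" executes the container's configured test once
    # and exits 0 when healthy, matching the timer-driven event above.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else "unhealthy")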
Oct 10 10:28:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8006eeece26bf62f0bac9b474b9f07861fe5674792f0ec87dc67341aeccd930-merged.mount: Deactivated successfully.
Oct 10 10:28:01 compute-0 podman[297511]: 2025-10-10 10:28:01.926286605 +0000 UTC m=+0.893337836 container remove f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 10:28:01 compute-0 systemd[1]: libpod-conmon-f1323e1c9f2ace290ae40a4234360b69c64d73e08a77e87db4ae8f48bf702552.scope: Deactivated successfully.
Oct 10 10:28:01 compute-0 sudo[297403]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:28:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:28:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:28:01 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:28:02 compute-0 sudo[297636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:28:02 compute-0 sudo[297636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:02 compute-0 sudo[297636]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:02 compute-0 ceph-mon[73551]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:28:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:28:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:02 compute-0 nova_compute[261329]: 2025-10-10 10:28:02.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:02 compute-0 sudo[297662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:28:02 compute-0 sudo[297662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:02 compute-0 sudo[297662]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:03 compute-0 nova_compute[261329]: 2025-10-10 10:28:03.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:03.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.002000064s ======
Oct 10 10:28:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:04.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Oct 10 10:28:04 compute-0 ceph-mon[73551]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:04 compute-0 nova_compute[261329]: 2025-10-10 10:28:04.797 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:04 compute-0 nova_compute[261329]: 2025-10-10 10:28:04.797 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:04 compute-0 nova_compute[261329]: 2025-10-10 10:28:04.798 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:28:04 compute-0 nova_compute[261329]: 2025-10-10 10:28:04.798 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:28:04 compute-0 nova_compute[261329]: 2025-10-10 10:28:04.839 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:28:04 compute-0 nova_compute[261329]: 2025-10-10 10:28:04.840 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
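Annotation: the burst of "Running periodic task ComputeManager._*" lines is one sweep of oslo.service's periodic task runner, and a single request id (req-3cee204a-...) is reused across the whole sweep. Schematically, each task is a decorated method on a PeriodicTasks subclass; a minimal sketch of the pattern, not nova's literal code:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # spacing is illustrative; nova derives the interval from config.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass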
Oct 10 10:28:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:05.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:06 compute-0 ceph-mon[73551]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:07 compute-0 nova_compute[261329]: 2025-10-10 10:28:07.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:07 compute-0 nova_compute[261329]: 2025-10-10 10:28:07.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:07 compute-0 nova_compute[261329]: 2025-10-10 10:28:07.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:07.259Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:28:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:07.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2704639440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:07 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1632890228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:07 compute-0 nova_compute[261329]: 2025-10-10 10:28:07.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:07.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:08 compute-0 nova_compute[261329]: 2025-10-10 10:28:08.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:08 compute-0 nova_compute[261329]: 2025-10-10 10:28:08.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:28:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:08.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:08 compute-0 nova_compute[261329]: 2025-10-10 10:28:08.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:08 compute-0 ceph-mon[73551]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:08.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:28:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:08.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:08.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
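Annotation: both dashboard webhook targets (compute-1 and compute-2 on 8443) are timing out, so every alert notification is retried and then dropped; the same warn/warn/error triplet recurs on a roughly 10 s cycle through the rest of this window. A throwaway receiver like the sketch below (hypothetical, reachability testing only) would turn these into 200s if the path to port 8443 is actually open:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical stand-in for the ceph-dashboard receiver alertmanager
    # cannot reach above; it accepts any POST, including
    # /api/prometheus_receiver, and answers 200.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()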
Oct 10 10:28:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:09 compute-0 nova_compute[261329]: 2025-10-10 10:28:09.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:09.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.264 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.265 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.265 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.265 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.265 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:28:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:28:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:28:10 compute-0 ceph-mon[73551]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:28:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577030174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.748 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
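Annotation: the resource tracker sizes shared RBD storage by shelling out to ceph df, 0.482 s here and again a few lines below, one call per pass that needs pool capacity. A sketch of the same query and the fields that matter:

    import json, subprocess

    # Same query the tracker ran above; "stats" carries cluster-wide totals.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], text=True)
    df = json.loads(out)
    avail = df["stats"]["total_avail_bytes"]
    total = df["stats"]["total_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")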
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.898 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.900 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.900 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.900 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.963 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.964 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:28:10 compute-0 nova_compute[261329]: 2025-10-10 10:28:10.985 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:28:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/577030174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:28:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943645577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:11 compute-0 nova_compute[261329]: 2025-10-10 10:28:11.448 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:28:11 compute-0 nova_compute[261329]: 2025-10-10 10:28:11.453 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:28:11 compute-0 nova_compute[261329]: 2025-10-10 10:28:11.473 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
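Annotation: the inventory is unchanged, but the ratios in it fix what the scheduler may place here. Per placement's capacity convention, capacity = (total - reserved) * allocation_ratio, so this 8-vCPU, 7680 MB, 59 GB host advertises 32 vCPUs, 7168 MB of RAM, and 52.2 GB of disk:

    # Effective capacity implied by the inventory data logged above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, f"{cap:g}")   # VCPU 32, MEMORY_MB 7168, DISK_GB 52.2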
Oct 10 10:28:11 compute-0 nova_compute[261329]: 2025-10-10 10:28:11.475 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:28:11 compute-0 nova_compute[261329]: 2025-10-10 10:28:11.475 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:28:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:11.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:12 compute-0 ceph-mon[73551]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2943645577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:12 compute-0 nova_compute[261329]: 2025-10-10 10:28:12.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:13 compute-0 nova_compute[261329]: 2025-10-10 10:28:13.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:13.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:14.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:14 compute-0 ceph-mon[73551]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:15 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3528792367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:15 compute-0 nova_compute[261329]: 2025-10-10 10:28:15.471 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:15.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:28:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:28:16
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'vms', '.mgr']
Oct 10 10:28:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:28:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
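Annotation: "prepared 0/10 upmap changes" means the balancer evaluated up to 10 candidate pg-upmap adjustments under its 5% max-misplaced budget and found nothing to move, i.e. the PG distribution across OSDs is already even. A quick confirmation (sketch; assumes this release's balancer module accepts a JSON format flag):

    import json, subprocess

    # Hedged: the --format json flag on "ceph balancer status" is assumed.
    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"], text=True)
    print(json.loads(out))   # expect mode "upmap" and an empty plan list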
Oct 10 10:28:16 compute-0 ceph-mon[73551]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:16 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1813754537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
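Annotation: the autoscaler lines all obey one relation: dividing each printed pg target by its usage ratio yields 300 x bias, consistent with 3 OSDs at the default mon_target_pg_per_osd=100 (the factor itself is not printed; the OSD count and the default are inferences). The target is then quantized to a power of two, subject to per-pool floors, and pg_num is only actually changed when target and current diverge widely, which is why every pool stays at its current value here:

    # Verify pg_target = usage_ratio * bias * K with K = 300, using three of
    # the rows logged above (K = 3 OSDs * mon_target_pg_per_osd=100 is an
    # assumption; only the products appear in the log).
    rows = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for pool, usage, bias, target in rows:
        assert abs(usage * bias * 300 - target) < 1e-12, pool
    print("all pg targets match usage * bias * 300")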
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:28:16 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:17.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:28:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:17.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:17.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:17 compute-0 nova_compute[261329]: 2025-10-10 10:28:17.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:17.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:18 compute-0 nova_compute[261329]: 2025-10-10 10:28:18.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:18 compute-0 ceph-mgr[73845]: [devicehealth INFO root] Check health
Oct 10 10:28:18 compute-0 ceph-mon[73551]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:18.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:28:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:18.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:18.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:19 compute-0 podman[297748]: 2025-10-10 10:28:19.209050123 +0000 UTC m=+0.057473133 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:28:19 compute-0 podman[297749]: 2025-10-10 10:28:19.212060819 +0000 UTC m=+0.058229448 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:28:19 compute-0 podman[297750]: 2025-10-10 10:28:19.249495743 +0000 UTC m=+0.087325966 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 10 10:28:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:28:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:19.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:28:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:20.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:20 compute-0 ceph-mon[73551]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:21.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:22 compute-0 nova_compute[261329]: 2025-10-10 10:28:22.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:22 compute-0 ceph-mon[73551]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:22 compute-0 sudo[297817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:28:22 compute-0 sudo[297817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:22 compute-0 sudo[297817]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:23 compute-0 nova_compute[261329]: 2025-10-10 10:28:23.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:24.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:24 compute-0 ceph-mon[73551]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:25.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:26.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:28:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3744893259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:28:26 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:28:26 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3744893259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:28:26 compute-0 ceph-mon[73551]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3744893259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:28:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/3744893259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:28:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:27.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:27 compute-0 nova_compute[261329]: 2025-10-10 10:28:27.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.514495) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107514528, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1316, "num_deletes": 251, "total_data_size": 2355778, "memory_usage": 2398192, "flush_reason": "Manual Compaction"}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107525745, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2302543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35620, "largest_seqno": 36935, "table_properties": {"data_size": 2296439, "index_size": 3367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13187, "raw_average_key_size": 20, "raw_value_size": 2284080, "raw_average_value_size": 3465, "num_data_blocks": 147, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091985, "oldest_key_time": 1760091985, "file_creation_time": 1760092107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 11289 microseconds, and 5162 cpu microseconds.
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.525782) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2302543 bytes OK
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.525800) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.531733) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.531772) EVENT_LOG_v1 {"time_micros": 1760092107531765, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.531795) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2350059, prev total WAL file size 2350059, number of live WAL files 2.
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.532668) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2248KB)], [77(11MB)]
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107532703, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14762352, "oldest_snapshot_seqno": -1}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6675 keys, 12616277 bytes, temperature: kUnknown
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107583875, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12616277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12574655, "index_size": 23846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175350, "raw_average_key_size": 26, "raw_value_size": 12457284, "raw_average_value_size": 1866, "num_data_blocks": 936, "num_entries": 6675, "num_filter_entries": 6675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760092107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.584225) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12616277 bytes
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.586717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 287.9 rd, 246.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(11.9) write-amplify(5.5) OK, records in: 7191, records dropped: 516 output_compression: NoCompression
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.586734) EVENT_LOG_v1 {"time_micros": 1760092107586726, "job": 44, "event": "compaction_finished", "compaction_time_micros": 51268, "compaction_time_cpu_micros": 28720, "output_level": 6, "num_output_files": 1, "total_output_size": 12616277, "num_input_records": 7191, "num_output_records": 6675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107587143, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107589214, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.532567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.589312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.589318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.589346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.589349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:28:27.589351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:27.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:28 compute-0 nova_compute[261329]: 2025-10-10 10:28:28.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:28 compute-0 ceph-mon[73551]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:28.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:28.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:28:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:28.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:29 compute-0 ceph-mon[73551]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:29 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:29 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:29 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:29.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:28:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:31 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:31 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:31 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:31.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:32 compute-0 podman[297852]: 2025-10-10 10:28:32.228969693 +0000 UTC m=+0.071310275 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 10 10:28:32 compute-0 ceph-mon[73551]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:32 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:32 compute-0 nova_compute[261329]: 2025-10-10 10:28:32.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:33 compute-0 nova_compute[261329]: 2025-10-10 10:28:33.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:33 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:33 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:33 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:33.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:34 compute-0 ceph-mon[73551]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:34.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=cleanup t=2025-10-10T10:28:34.719657203Z level=info msg="Completed cleanup jobs" duration=8.857823ms
Oct 10 10:28:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=plugins.update.checker t=2025-10-10T10:28:34.853903023Z level=info msg="Update check succeeded" duration=52.932758ms
Oct 10 10:28:34 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-grafana-compute-0[103575]: logger=grafana.update.checker t=2025-10-10T10:28:34.956199445Z level=info msg="Update check succeeded" duration=120.233754ms
Oct 10 10:28:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:35 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:35 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:35 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:35.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:36 compute-0 ceph-mon[73551]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:36.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:37.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:37 compute-0 nova_compute[261329]: 2025-10-10 10:28:37.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:37 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:37 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:37 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:37.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:38 compute-0 ceph-mon[73551]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:38.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:38 compute-0 nova_compute[261329]: 2025-10-10 10:28:38.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:38.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:39 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:39 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:39 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:39.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:40 compute-0 ceph-mon[73551]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:28:41.916 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:28:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:28:41.917 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:28:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:28:41.917 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:28:41 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:41 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:41 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:42 compute-0 ceph-mon[73551]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:42 compute-0 nova_compute[261329]: 2025-10-10 10:28:42.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:42 compute-0 sudo[297883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:28:42 compute-0 sudo[297883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:42 compute-0 sudo[297883]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:43 compute-0 nova_compute[261329]: 2025-10-10 10:28:43.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:43 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:43 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:43 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:43.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:44 compute-0 ceph-mon[73551]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:45 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:45 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:45 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:28:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:46 compute-0 ceph-mon[73551]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:28:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:28:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:28:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:28:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:28:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:28:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:47.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:47 compute-0 nova_compute[261329]: 2025-10-10 10:28:47.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:47 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:47 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:47 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:47.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:48 compute-0 nova_compute[261329]: 2025-10-10 10:28:48.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:48 compute-0 ceph-mon[73551]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:48.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:48.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:28:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:49 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:49 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:49 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:50 compute-0 podman[297914]: 2025-10-10 10:28:50.227167473 +0000 UTC m=+0.073528154 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true)
Oct 10 10:28:50 compute-0 podman[297915]: 2025-10-10 10:28:50.227277257 +0000 UTC m=+0.068437693 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:28:50 compute-0 podman[297917]: 2025-10-10 10:28:50.254503446 +0000 UTC m=+0.096874311 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:28:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:50.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:50 compute-0 ceph-mon[73551]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:51 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:51 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:51 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:52.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:52 compute-0 ceph-mon[73551]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:52 compute-0 nova_compute[261329]: 2025-10-10 10:28:52.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:53 compute-0 nova_compute[261329]: 2025-10-10 10:28:53.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:53 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:53 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:53 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:53.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:28:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:28:54 compute-0 ceph-mon[73551]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:55 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:55 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:55 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:28:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:28:56 compute-0 ceph-mon[73551]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:57.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:28:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:28:57 compute-0 nova_compute[261329]: 2025-10-10 10:28:57.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:57 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:57 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:57 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:58 compute-0 nova_compute[261329]: 2025-10-10 10:28:58.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:58 compute-0 ceph-mon[73551]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:28:58.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:28:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:59 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:28:59 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:59 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:59.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:00 compute-0 ceph-mon[73551]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:29:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:01 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:01 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:01 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:01.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:02 compute-0 sudo[297992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:29:02 compute-0 sudo[297992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:02 compute-0 sudo[297992]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:02 compute-0 podman[298016]: 2025-10-10 10:29:02.406639312 +0000 UTC m=+0.055519809 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:29:02 compute-0 sudo[298023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:29:02 compute-0 sudo[298023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:02 compute-0 nova_compute[261329]: 2025-10-10 10:29:02.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:02 compute-0 ceph-mon[73551]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:02 compute-0 sudo[298023]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:29:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:29:03 compute-0 sudo[298095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:29:03 compute-0 sudo[298095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:03 compute-0 sudo[298095]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:03 compute-0 nova_compute[261329]: 2025-10-10 10:29:03.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:03 compute-0 nova_compute[261329]: 2025-10-10 10:29:03.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:29:03 compute-0 nova_compute[261329]: 2025-10-10 10:29:03.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:29:03 compute-0 sudo[298118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:29:03 compute-0 sudo[298118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:03 compute-0 sudo[298118]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:03 compute-0 nova_compute[261329]: 2025-10-10 10:29:03.259 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:29:03 compute-0 sudo[298145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:29:03 compute-0 sudo[298145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:03 compute-0 nova_compute[261329]: 2025-10-10 10:29:03.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:29:03 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.738737562 +0000 UTC m=+0.049934771 container create b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 10:29:03 compute-0 systemd[1]: Started libpod-conmon-b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540.scope.
Oct 10 10:29:03 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.709393602 +0000 UTC m=+0.020590851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.805555802 +0000 UTC m=+0.116753111 container init b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.811241334 +0000 UTC m=+0.122438543 container start b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.814440797 +0000 UTC m=+0.125638086 container attach b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 10 10:29:03 compute-0 magical_mestorf[298226]: 167 167
Oct 10 10:29:03 compute-0 systemd[1]: libpod-b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540.scope: Deactivated successfully.
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.820391018 +0000 UTC m=+0.131588247 container died b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 10:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f152c07f9640afbb3d3d6b23132bdf78d2cb317edb4cac2eab6f1b1c2861b78c-merged.mount: Deactivated successfully.
Oct 10 10:29:03 compute-0 podman[298210]: 2025-10-10 10:29:03.853418226 +0000 UTC m=+0.164615435 container remove b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:29:03 compute-0 systemd[1]: libpod-conmon-b54e2e8c5cafb0dd2b622d11f2d4ced12db38b4acc1de78c4641cdc720f10540.scope: Deactivated successfully.
Oct 10 10:29:03 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:03 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:03 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.081493134 +0000 UTC m=+0.067926638 container create 01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 10:29:04 compute-0 systemd[1]: Started libpod-conmon-01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07.scope.
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.053831758 +0000 UTC m=+0.040265352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:29:04 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f43355725466bf96a0ae9629ffb733660b24c13f0dec674330cd355f513fb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f43355725466bf96a0ae9629ffb733660b24c13f0dec674330cd355f513fb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f43355725466bf96a0ae9629ffb733660b24c13f0dec674330cd355f513fb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f43355725466bf96a0ae9629ffb733660b24c13f0dec674330cd355f513fb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f43355725466bf96a0ae9629ffb733660b24c13f0dec674330cd355f513fb0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.190968781 +0000 UTC m=+0.177402295 container init 01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.206423366 +0000 UTC m=+0.192856870 container start 01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_wozniak, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.210448305 +0000 UTC m=+0.196881809 container attach 01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 10:29:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:04.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:04 compute-0 ceph-mon[73551]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:04 compute-0 compassionate_wozniak[298267]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:29:04 compute-0 compassionate_wozniak[298267]: --> All data devices are unavailable
Oct 10 10:29:04 compute-0 systemd[1]: libpod-01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07.scope: Deactivated successfully.
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.583749735 +0000 UTC m=+0.570183279 container died 01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-14f43355725466bf96a0ae9629ffb733660b24c13f0dec674330cd355f513fb0-merged.mount: Deactivated successfully.
Oct 10 10:29:04 compute-0 podman[298251]: 2025-10-10 10:29:04.623347234 +0000 UTC m=+0.609780728 container remove 01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 10 10:29:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:04 compute-0 systemd[1]: libpod-conmon-01bb612457ba5b4cd20f4b5edb7950bd0e8870964a1480ac55e5d239996adf07.scope: Deactivated successfully.
Oct 10 10:29:04 compute-0 sudo[298145]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:04 compute-0 sudo[298294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:29:04 compute-0 sudo[298294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:04 compute-0 sudo[298294]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:04 compute-0 sudo[298319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:29:04 compute-0 sudo[298319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.243840655 +0000 UTC m=+0.055143039 container create 4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_thompson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 10:29:05 compute-0 systemd[1]: Started libpod-conmon-4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba.scope.
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.212594113 +0000 UTC m=+0.023896537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:29:05 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.323683672 +0000 UTC m=+0.134986066 container init 4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_thompson, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.33142258 +0000 UTC m=+0.142724954 container start 4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_thompson, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:29:05 compute-0 strange_thompson[298403]: 167 167
Oct 10 10:29:05 compute-0 systemd[1]: libpod-4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba.scope: Deactivated successfully.
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.338502757 +0000 UTC m=+0.149805141 container attach 4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.338855599 +0000 UTC m=+0.150157973 container died 4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_thompson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 10:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f15c141166a2ac9c14592bb83cc21156e665cb936ae01acc938e381de27c12e8-merged.mount: Deactivated successfully.
Oct 10 10:29:05 compute-0 podman[298387]: 2025-10-10 10:29:05.376278417 +0000 UTC m=+0.187580801 container remove 4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_thompson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:29:05 compute-0 systemd[1]: libpod-conmon-4034b37916a5e87b54ad0e49eb52edd9bcad3e6ce891d6ba08c8b0c3d361f0ba.scope: Deactivated successfully.
Oct 10 10:29:05 compute-0 podman[298427]: 2025-10-10 10:29:05.544096944 +0000 UTC m=+0.043150933 container create 6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:29:05 compute-0 ceph-mon[73551]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:05 compute-0 systemd[1]: Started libpod-conmon-6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a.scope.
Oct 10 10:29:05 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1615cde992aa3b39f2902d9fdb1bd8c1317727a3edbbf66f2c3455179034ed38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:05 compute-0 podman[298427]: 2025-10-10 10:29:05.524409123 +0000 UTC m=+0.023463142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1615cde992aa3b39f2902d9fdb1bd8c1317727a3edbbf66f2c3455179034ed38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1615cde992aa3b39f2902d9fdb1bd8c1317727a3edbbf66f2c3455179034ed38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1615cde992aa3b39f2902d9fdb1bd8c1317727a3edbbf66f2c3455179034ed38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:05 compute-0 podman[298427]: 2025-10-10 10:29:05.634921724 +0000 UTC m=+0.133975733 container init 6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:29:05 compute-0 podman[298427]: 2025-10-10 10:29:05.643424097 +0000 UTC m=+0.142478086 container start 6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:29:05 compute-0 podman[298427]: 2025-10-10 10:29:05.646485655 +0000 UTC m=+0.145539664 container attach 6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_snyder, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:29:05 compute-0 bold_snyder[298443]: {
Oct 10 10:29:05 compute-0 bold_snyder[298443]:     "0": [
Oct 10 10:29:05 compute-0 bold_snyder[298443]:         {
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "devices": [
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "/dev/loop3"
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             ],
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "lv_name": "ceph_lv0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "lv_size": "21470642176",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "name": "ceph_lv0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "tags": {
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.cluster_name": "ceph",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.crush_device_class": "",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.encrypted": "0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.osd_id": "0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.type": "block",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.vdo": "0",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:                 "ceph.with_tpm": "0"
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             },
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "type": "block",
Oct 10 10:29:05 compute-0 bold_snyder[298443]:             "vg_name": "ceph_vg0"
Oct 10 10:29:05 compute-0 bold_snyder[298443]:         }
Oct 10 10:29:05 compute-0 bold_snyder[298443]:     ]
Oct 10 10:29:05 compute-0 bold_snyder[298443]: }
Oct 10 10:29:05 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:05 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:05 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:06 compute-0 systemd[1]: libpod-6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a.scope: Deactivated successfully.
Oct 10 10:29:06 compute-0 podman[298427]: 2025-10-10 10:29:06.002650515 +0000 UTC m=+0.501704504 container died 6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1615cde992aa3b39f2902d9fdb1bd8c1317727a3edbbf66f2c3455179034ed38-merged.mount: Deactivated successfully.
Oct 10 10:29:06 compute-0 podman[298427]: 2025-10-10 10:29:06.052255335 +0000 UTC m=+0.551309324 container remove 6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_snyder, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:29:06 compute-0 systemd[1]: libpod-conmon-6524c0412455bab73b8df6a3fcb7fa75c960ed51e725164259ef2894a9430f9a.scope: Deactivated successfully.
Oct 10 10:29:06 compute-0 sudo[298319]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:06 compute-0 sudo[298465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:29:06 compute-0 sudo[298465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:06 compute-0 sudo[298465]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:06 compute-0 nova_compute[261329]: 2025-10-10 10:29:06.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:06 compute-0 nova_compute[261329]: 2025-10-10 10:29:06.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:06 compute-0 sudo[298491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:29:06 compute-0 sudo[298491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:06.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.642664271 +0000 UTC m=+0.050131037 container create 8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:29:06 compute-0 systemd[1]: Started libpod-conmon-8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f.scope.
Oct 10 10:29:06 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.620529652 +0000 UTC m=+0.027996508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.717655414 +0000 UTC m=+0.125122260 container init 8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_euler, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.727392056 +0000 UTC m=+0.134858812 container start 8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_euler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:29:06 compute-0 hungry_euler[298571]: 167 167
Oct 10 10:29:06 compute-0 systemd[1]: libpod-8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f.scope: Deactivated successfully.
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.731634481 +0000 UTC m=+0.139101277 container attach 8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 10:29:06 compute-0 conmon[298571]: conmon 8da2fd4780da2d500968 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f.scope/container/memory.events
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.734007687 +0000 UTC m=+0.141474483 container died 8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 10:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-197efb67c78d7dee381b43674573a5c14e857bcbd83410a835e0d14f0f258b3c-merged.mount: Deactivated successfully.
Oct 10 10:29:06 compute-0 podman[298555]: 2025-10-10 10:29:06.796763068 +0000 UTC m=+0.204229824 container remove 8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_euler, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:29:06 compute-0 systemd[1]: libpod-conmon-8da2fd4780da2d500968c9e165b2763cf9ff6c90177c5ea13e1877fed3945d8f.scope: Deactivated successfully.
Oct 10 10:29:06 compute-0 podman[298597]: 2025-10-10 10:29:06.968624624 +0000 UTC m=+0.051176830 container create ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:29:07 compute-0 systemd[1]: Started libpod-conmon-ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491.scope.
Oct 10 10:29:07 compute-0 podman[298597]: 2025-10-10 10:29:06.945881736 +0000 UTC m=+0.028433942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:29:07 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bd0eaac37f2c2d44e03f284ceada11e2957558cdfd38b78e8327d8c174b018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bd0eaac37f2c2d44e03f284ceada11e2957558cdfd38b78e8327d8c174b018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bd0eaac37f2c2d44e03f284ceada11e2957558cdfd38b78e8327d8c174b018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bd0eaac37f2c2d44e03f284ceada11e2957558cdfd38b78e8327d8c174b018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:29:07 compute-0 podman[298597]: 2025-10-10 10:29:07.074732875 +0000 UTC m=+0.157285071 container init ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_elion, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:29:07 compute-0 podman[298597]: 2025-10-10 10:29:07.08334153 +0000 UTC m=+0.165893716 container start ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_elion, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:29:07 compute-0 podman[298597]: 2025-10-10 10:29:07.087031818 +0000 UTC m=+0.169584054 container attach ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Oct 10 10:29:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:07 compute-0 nova_compute[261329]: 2025-10-10 10:29:07.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:07.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:29:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:07.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:29:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:07.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:29:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:29:07 compute-0 nova_compute[261329]: 2025-10-10 10:29:07.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:07 compute-0 lvm[298687]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:29:07 compute-0 lvm[298687]: VG ceph_vg0 finished
Oct 10 10:29:07 compute-0 pensive_elion[298613]: {}
Oct 10 10:29:07 compute-0 systemd[1]: libpod-ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491.scope: Deactivated successfully.
Oct 10 10:29:07 compute-0 systemd[1]: libpod-ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491.scope: Consumed 1.088s CPU time.
Oct 10 10:29:07 compute-0 podman[298597]: 2025-10-10 10:29:07.787234062 +0000 UTC m=+0.869786248 container died ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-51bd0eaac37f2c2d44e03f284ceada11e2957558cdfd38b78e8327d8c174b018-merged.mount: Deactivated successfully.
Oct 10 10:29:07 compute-0 podman[298597]: 2025-10-10 10:29:07.830857069 +0000 UTC m=+0.913409255 container remove ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 10:29:07 compute-0 systemd[1]: libpod-conmon-ade6cb3811b753253645f5621028c7e77ef1f7ac6401260961a18c78b69be491.scope: Deactivated successfully.
Oct 10 10:29:07 compute-0 sudo[298491]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:29:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:07 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:29:07 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:07 compute-0 sudo[298702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:29:07 compute-0 sudo[298702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:07 compute-0 sudo[298702]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:07 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:07 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:07 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:07.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:08 compute-0 ceph-mon[73551]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:08 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:08 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:08 compute-0 nova_compute[261329]: 2025-10-10 10:29:08.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:08.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:08.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2349870666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2057179231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:09 compute-0 nova_compute[261329]: 2025-10-10 10:29:09.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:09 compute-0 nova_compute[261329]: 2025-10-10 10:29:09.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:09 compute-0 nova_compute[261329]: 2025-10-10 10:29:09.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:09 compute-0 nova_compute[261329]: 2025-10-10 10:29:09.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:29:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:09 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:09 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:09 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:09.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:10 compute-0 ceph-mon[73551]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.271 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.272 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.272 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.272 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.272 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:29:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:10.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:29:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966094890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.731 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.886 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.887 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4407MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.887 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:29:10 compute-0 nova_compute[261329]: 2025-10-10 10:29:10.888 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.007 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.008 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.032 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:29:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1966094890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:29:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534647443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.506 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.514 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.537 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.539 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:29:11 compute-0 nova_compute[261329]: 2025-10-10 10:29:11.540 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:29:11 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:11 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:11 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:11.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:12 compute-0 ceph-mon[73551]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:12 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2534647443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:12.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:12 compute-0 nova_compute[261329]: 2025-10-10 10:29:12.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:12 compute-0 nova_compute[261329]: 2025-10-10 10:29:12.541 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:13 compute-0 nova_compute[261329]: 2025-10-10 10:29:13.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:13 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:13 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:13 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:14 compute-0 ceph-mon[73551]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:15 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:15 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:15 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:16 compute-0 ceph-mon[73551]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:29:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:29:16
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['.rgw.root', 'images', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms']
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:29:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:29:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:17.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:17 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:17 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1325988144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:17] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:17] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:17 compute-0 nova_compute[261329]: 2025-10-10 10:29:17.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:17 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:17 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:17 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:18 compute-0 ceph-mon[73551]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2046352366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:18 compute-0 nova_compute[261329]: 2025-10-10 10:29:18.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:18.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:19 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:19 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:19 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:20 compute-0 ceph-mon[73551]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:20.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:21 compute-0 podman[298786]: 2025-10-10 10:29:21.227735695 +0000 UTC m=+0.068688361 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:29:21 compute-0 podman[298785]: 2025-10-10 10:29:21.241405363 +0000 UTC m=+0.081516373 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 10 10:29:21 compute-0 podman[298787]: 2025-10-10 10:29:21.264298356 +0000 UTC m=+0.102790434 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 10:29:21 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:21 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:21 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:22 compute-0 ceph-mon[73551]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:22 compute-0 nova_compute[261329]: 2025-10-10 10:29:22.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:23 compute-0 sudo[298850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:29:23 compute-0 sudo[298850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:23 compute-0 sudo[298850]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:23 compute-0 nova_compute[261329]: 2025-10-10 10:29:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:23 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:23 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:23 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:24 compute-0 ceph-mon[73551]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:24.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:25 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:25 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:25 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:25.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:26 compute-0 ceph-mon[73551]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:27.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1412912917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:29:27 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1412912917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:29:27 compute-0 nova_compute[261329]: 2025-10-10 10:29:27.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:27 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:27 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:27 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:27.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:28 compute-0 nova_compute[261329]: 2025-10-10 10:29:28.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:28.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:28 compute-0 ceph-mon[73551]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:28.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:29.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:30.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:30 compute-0 ceph-mon[73551]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:29:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:32.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:32 compute-0 ceph-mon[73551]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:32 compute-0 nova_compute[261329]: 2025-10-10 10:29:32.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:33 compute-0 podman[298885]: 2025-10-10 10:29:33.226730535 +0000 UTC m=+0.066757430 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:29:33 compute-0 nova_compute[261329]: 2025-10-10 10:29:33.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:34.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:34 compute-0 ceph-mon[73551]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:36 compute-0 ceph-mon[73551]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:37.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:29:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 10 10:29:37 compute-0 nova_compute[261329]: 2025-10-10 10:29:37.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:38 compute-0 nova_compute[261329]: 2025-10-10 10:29:38.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:38.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:38 compute-0 ceph-mon[73551]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:38.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:40 compute-0 ceph-mon[73551]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:41 compute-0 ceph-mon[73551]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:29:41.917 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:29:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:29:41.918 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:29:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:29:41.918 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:29:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:42.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:42.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:42 compute-0 nova_compute[261329]: 2025-10-10 10:29:42.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:43 compute-0 nova_compute[261329]: 2025-10-10 10:29:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:43 compute-0 sudo[298914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:29:43 compute-0 sudo[298914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:43 compute-0 sudo[298914]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:44 compute-0 ceph-mon[73551]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:46.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:46 compute-0 ceph-mon[73551]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:29:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:29:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:29:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:29:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:29:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:29:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:29:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:47 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:47.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:47 compute-0 nova_compute[261329]: 2025-10-10 10:29:47.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:48 compute-0 ceph-mon[73551]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:48 compute-0 nova_compute[261329]: 2025-10-10 10:29:48.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:48.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:50 compute-0 ceph-mon[73551]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:50.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:52.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:52 compute-0 podman[298947]: 2025-10-10 10:29:52.252230926 +0000 UTC m=+0.080407057 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:29:52 compute-0 podman[298949]: 2025-10-10 10:29:52.258977453 +0000 UTC m=+0.078521937 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:29:52 compute-0 podman[298950]: 2025-10-10 10:29:52.307224809 +0000 UTC m=+0.132389683 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 10 10:29:52 compute-0 ceph-mon[73551]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:29:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:29:52 compute-0 nova_compute[261329]: 2025-10-10 10:29:52.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:53 compute-0 nova_compute[261329]: 2025-10-10 10:29:53.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:54 compute-0 ceph-mon[73551]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:56 compute-0 ceph-mon[73551]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:29:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:56.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:29:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:57.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:29:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 10 10:29:57 compute-0 nova_compute[261329]: 2025-10-10 10:29:57.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:58.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:58 compute-0 ceph-mon[73551]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:58 compute-0 nova_compute[261329]: 2025-10-10 10:29:58.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:29:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:58.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:58.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:29:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:29:58.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:29:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 10 10:30:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 10 10:30:00 compute-0 ceph-mon[73551]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0 is in error state
Oct 10 10:30:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:00.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:00 compute-0 ceph-mon[73551]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:00 compute-0 ceph-mon[73551]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 10 10:30:00 compute-0 ceph-mon[73551]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 10 10:30:00 compute-0 ceph-mon[73551]:     daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0 is in error state
Oct 10 10:30:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:30:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:01 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:02.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:02 compute-0 ceph-mon[73551]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:02 compute-0 nova_compute[261329]: 2025-10-10 10:30:02.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:03 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:03 compute-0 nova_compute[261329]: 2025-10-10 10:30:03.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:03 compute-0 sudo[299021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:30:03 compute-0 sudo[299021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:03 compute-0 sudo[299021]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:03 compute-0 podman[299045]: 2025-10-10 10:30:03.577592532 +0000 UTC m=+0.054988204 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:30:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:04 compute-0 nova_compute[261329]: 2025-10-10 10:30:04.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:04 compute-0 nova_compute[261329]: 2025-10-10 10:30:04.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:30:04 compute-0 nova_compute[261329]: 2025-10-10 10:30:04.237 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:30:04 compute-0 nova_compute[261329]: 2025-10-10 10:30:04.263 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:30:04 compute-0 ceph-mon[73551]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:04 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:04 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:04 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:04 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:05 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:06 compute-0 ceph-mon[73551]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:06 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:06 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:06 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:07 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:07.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:07 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:30:07 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Oct 10 10:30:07 compute-0 nova_compute[261329]: 2025-10-10 10:30:07.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:08.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:08 compute-0 nova_compute[261329]: 2025-10-10 10:30:08.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:08 compute-0 nova_compute[261329]: 2025-10-10 10:30:08.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:08 compute-0 nova_compute[261329]: 2025-10-10 10:30:08.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:08 compute-0 sudo[299073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:30:08 compute-0 sudo[299073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:08 compute-0 sudo[299073]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:08 compute-0 sudo[299098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:30:08 compute-0 sudo[299098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:08 compute-0 nova_compute[261329]: 2025-10-10 10:30:08.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:08 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:08 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:08 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:08 compute-0 ceph-mon[73551]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:08 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1914201702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:08 compute-0 sudo[299098]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:08 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:08.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:08 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 10 10:30:08 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:09 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:09 compute-0 nova_compute[261329]: 2025-10-10 10:30:09.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:09 compute-0 nova_compute[261329]: 2025-10-10 10:30:09.237 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:09 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:09 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/919033109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:09 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:10.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.238 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.238 2 DEBUG nova.compute.manager [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.239 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.261 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.262 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.262 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.262 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.262 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:30:10 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:10 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:10 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:10 compute-0 ceph-mon[73551]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 10 10:30:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 10 10:30:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:30:10 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921870048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.736 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
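The update_available_resource periodic task sizes the RBD-backed disk pool by shelling out to the exact command logged above; the "df ... dispatch" lines on the mon are the server side of the same call. A sketch of that call and of reading the cluster-wide totals ("stats" with "total_bytes"/"total_avail_bytes" is the standard top level of ceph df JSON):

    import json
    import subprocess

    # Same command nova ran above, via oslo_concurrency.processutils.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(raw)
    stats = df["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])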
Oct 10 10:30:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 10 10:30:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:10 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 10 10:30:10 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.895 2 WARNING nova.virt.libvirt.driver [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.896 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.896 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.896 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.956 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.957 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:30:10 compute-0 nova_compute[261329]: 2025-10-10 10:30:10.969 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:30:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445838709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:11 compute-0 nova_compute[261329]: 2025-10-10 10:30:11.432 2 DEBUG oslo_concurrency.processutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:30:11 compute-0 nova_compute[261329]: 2025-10-10 10:30:11.438 2 DEBUG nova.compute.provider_tree [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed in ProviderTree for provider: 5b1ab6df-62aa-4a93-8e24-04440191f108 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:30:11 compute-0 nova_compute[261329]: 2025-10-10 10:30:11.457 2 DEBUG nova.scheduler.client.report [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Inventory has not changed for provider 5b1ab6df-62aa-4a93-8e24-04440191f108 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:30:11 compute-0 nova_compute[261329]: 2025-10-10 10:30:11.459 2 DEBUG nova.compute.resource_tracker [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:30:11 compute-0 nova_compute[261329]: 2025-10-10 10:30:11.460 2 DEBUG oslo_concurrency.lockutils [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
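The inventory nova reports to placement above is what actually bounds scheduling: effective capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the values in the log, which gives 32 schedulable vCPUs, 7168 MB of RAM, and 52.2 GB of disk for this host:

    # Effective schedulable capacity from the inventory logged above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2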
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1921870048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-0 ceph-mon[73551]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3445838709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:30:11 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 721 B/s rd, 0 op/s
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:30:11 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:30:11 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:11 compute-0 sudo[299203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:30:11 compute-0 sudo[299203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:11 compute-0 sudo[299203]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:11 compute-0 sudo[299228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 10:30:11 compute-0 sudo[299228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
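Here cephadm tries to provision the OSD: it runs ceph-volume inside a throwaway container (the brave_matsumoto/optimistic_buck podman events that follow), with "--config-json -" telling cephadm to read the ceph.conf and keyring for the operation from stdin, and everything after "--" passed through as the ceph-volume command proper. A hedged sketch of that stdin pattern, assuming the JSON keys "config" and "keyring" (which match cephadm's behavior to the best of my knowledge) and abbreviating the binary path as before:

    import json
    import subprocess

    # Config and bootstrap-osd keyring are fed on stdin; the keyring path
    # is the one bind-mounted into the container per the kernel lines below.
    cfg = json.dumps({
        "config": open("/etc/ceph/ceph.conf").read(),
        "keyring": open("/var/lib/ceph/bootstrap-osd/ceph.keyring").read(),
    })
    subprocess.run(
        ["sudo", "python3", "/var/lib/ceph/FSID/cephadm.HASH",
         "ceph-volume", "--fsid", "FSID", "--config-json", "-",
         "--", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
         "--yes", "--no-systemd"],
        input=cfg, text=True, check=True,
    )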
Oct 10 10:30:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:30:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.306057225 +0000 UTC m=+0.041608634 container create ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:30:12 compute-0 systemd[1]: Started libpod-conmon-ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3.scope.
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.287143729 +0000 UTC m=+0.022695168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:30:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.413121475 +0000 UTC m=+0.148672944 container init ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_matsumoto, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.421262686 +0000 UTC m=+0.156814085 container start ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_matsumoto, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.424564732 +0000 UTC m=+0.160116221 container attach ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:30:12 compute-0 brave_matsumoto[299311]: 167 167
Oct 10 10:30:12 compute-0 systemd[1]: libpod-ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3.scope: Deactivated successfully.
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.427170985 +0000 UTC m=+0.162722384 container died ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_matsumoto, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-90248db31b0675fee00537bb0f73c8d4c20996ee696669ebdeed436541d939ef-merged.mount: Deactivated successfully.
Oct 10 10:30:12 compute-0 nova_compute[261329]: 2025-10-10 10:30:12.459 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:12 compute-0 podman[299294]: 2025-10-10 10:30:12.46382725 +0000 UTC m=+0.199378639 container remove ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_matsumoto, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 10 10:30:12 compute-0 systemd[1]: libpod-conmon-ae9c5703c970ac4d787b2e83426e7d54162cbf54570774c58b2240a6e71674e3.scope: Deactivated successfully.
Oct 10 10:30:12 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:12 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:12 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:12 compute-0 nova_compute[261329]: 2025-10-10 10:30:12.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:12 compute-0 podman[299337]: 2025-10-10 10:30:12.613632219 +0000 UTC m=+0.040634022 container create d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Oct 10 10:30:12 compute-0 systemd[1]: Started libpod-conmon-d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149.scope.
Oct 10 10:30:12 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27727f7f05eec09c5e4e2f013576006bc9bf5e1ca2e3c1ecc747144ad820ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27727f7f05eec09c5e4e2f013576006bc9bf5e1ca2e3c1ecc747144ad820ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27727f7f05eec09c5e4e2f013576006bc9bf5e1ca2e3c1ecc747144ad820ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27727f7f05eec09c5e4e2f013576006bc9bf5e1ca2e3c1ecc747144ad820ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27727f7f05eec09c5e4e2f013576006bc9bf5e1ca2e3c1ecc747144ad820ac3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:30:12 compute-0 ceph-mon[73551]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:30:12 compute-0 ceph-mon[73551]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 721 B/s rd, 0 op/s
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:30:12 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:12 compute-0 podman[299337]: 2025-10-10 10:30:12.681958149 +0000 UTC m=+0.108959972 container init d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:30:12 compute-0 podman[299337]: 2025-10-10 10:30:12.594097903 +0000 UTC m=+0.021099736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:30:12 compute-0 podman[299337]: 2025-10-10 10:30:12.689902353 +0000 UTC m=+0.116904146 container start d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:30:12 compute-0 podman[299337]: 2025-10-10 10:30:12.69293128 +0000 UTC m=+0.119933113 container attach d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_buck, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 10:30:13 compute-0 optimistic_buck[299353]: --> passed data devices: 0 physical, 1 LVM
Oct 10 10:30:13 compute-0 optimistic_buck[299353]: --> All data devices are unavailable
Oct 10 10:30:13 compute-0 systemd[1]: libpod-d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149.scope: Deactivated successfully.
Oct 10 10:30:13 compute-0 podman[299337]: 2025-10-10 10:30:13.047128678 +0000 UTC m=+0.474130471 container died d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_buck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f27727f7f05eec09c5e4e2f013576006bc9bf5e1ca2e3c1ecc747144ad820ac3-merged.mount: Deactivated successfully.
Oct 10 10:30:13 compute-0 podman[299337]: 2025-10-10 10:30:13.092699278 +0000 UTC m=+0.519701081 container remove d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 10:30:13 compute-0 systemd[1]: libpod-conmon-d8ebfd00dd04b2eb4b99aeca36252c1245f130510d3be5445e322d9352ac3149.scope: Deactivated successfully.
Oct 10 10:30:13 compute-0 sudo[299228]: pam_unix(sudo:session): session closed for user root
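The batch run itself was a no-op: "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means ceph-volume examined /dev/ceph_vg0/ceph_lv0 and rejected it, almost certainly because the LV already carries OSD 0 (the lvm list output moments later confirms this), so the reconcile exits without changes rather than failing. For raw devices the same availability question can be asked with ceph-volume's inventory; a sketch under the assumption that "available" and "rejected_reasons" fields are present in your version's inventory JSON:

    import json
    import subprocess

    # Ask ceph-volume (via cephadm, path abbreviated) which devices
    # it would consider usable and why others are rejected.
    raw = subprocess.run(
        ["sudo", "python3", "/var/lib/ceph/FSID/cephadm.HASH",
         "ceph-volume", "--fsid", "FSID",
         "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        print(dev["path"], dev["available"], dev["rejected_reasons"])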
Oct 10 10:30:13 compute-0 sudo[299383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:30:13 compute-0 sudo[299383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:13 compute-0 sudo[299383]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:13 compute-0 sudo[299408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 10:30:13 compute-0 sudo[299408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:13 compute-0 nova_compute[261329]: 2025-10-10 10:30:13.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:13 compute-0 podman[299476]: 2025-10-10 10:30:13.689392936 +0000 UTC m=+0.057293507 container create 3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 10:30:13 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:13 compute-0 systemd[1]: Started libpod-conmon-3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9.scope.
Oct 10 10:30:13 compute-0 podman[299476]: 2025-10-10 10:30:13.662215446 +0000 UTC m=+0.030116037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:30:13 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:30:13 compute-0 podman[299476]: 2025-10-10 10:30:13.793563164 +0000 UTC m=+0.161463785 container init 3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:30:13 compute-0 podman[299476]: 2025-10-10 10:30:13.802537761 +0000 UTC m=+0.170438342 container start 3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:30:13 compute-0 podman[299476]: 2025-10-10 10:30:13.806741466 +0000 UTC m=+0.174642057 container attach 3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:30:13 compute-0 systemd[1]: libpod-3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9.scope: Deactivated successfully.
Oct 10 10:30:13 compute-0 awesome_dhawan[299492]: 167 167
Oct 10 10:30:13 compute-0 conmon[299492]: conmon 3208534dc7c6252b3ea0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9.scope/container/memory.events
Oct 10 10:30:13 compute-0 podman[299497]: 2025-10-10 10:30:13.861240822 +0000 UTC m=+0.033850105 container died 3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dhawan, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 10:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab4db4919913173c8fc4b2774adc18db9da4ed108b0145217cb5baa5a7bc4db5-merged.mount: Deactivated successfully.
Oct 10 10:30:13 compute-0 podman[299497]: 2025-10-10 10:30:13.905831391 +0000 UTC m=+0.078440614 container remove 3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 10:30:13 compute-0 systemd[1]: libpod-conmon-3208534dc7c6252b3ea0c7b4d9fa5d4d2f2af9bbc5f430849883dcd822bbd5f9.scope: Deactivated successfully.
Oct 10 10:30:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:30:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:14.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.111880162 +0000 UTC m=+0.047926897 container create 53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:30:14 compute-0 systemd[1]: Started libpod-conmon-53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45.scope.
Oct 10 10:30:14 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9c0fa0724356d31837953387c7a2c330471ca7183969e0158e25bb627a5380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9c0fa0724356d31837953387c7a2c330471ca7183969e0158e25bb627a5380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9c0fa0724356d31837953387c7a2c330471ca7183969e0158e25bb627a5380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9c0fa0724356d31837953387c7a2c330471ca7183969e0158e25bb627a5380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.093150922 +0000 UTC m=+0.029197687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.194654814 +0000 UTC m=+0.130701609 container init 53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.200384547 +0000 UTC m=+0.136431292 container start 53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.203432315 +0000 UTC m=+0.139479080 container attach 53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]: {
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:     "0": [
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:         {
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "devices": [
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "/dev/loop3"
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             ],
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "lv_name": "ceph_lv0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "lv_size": "21470642176",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c307f4a4-39e7-4a9c-9d19-a2b8712089ab,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "lv_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "name": "ceph_lv0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "tags": {
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.block_uuid": "7iLAkm-K8ng-Mg2c-TmOt-E1VS-pVcN-eHoyYh",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.cluster_name": "ceph",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.crush_device_class": "",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.encrypted": "0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.osd_fsid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.osd_id": "0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.type": "block",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.vdo": "0",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:                 "ceph.with_tpm": "0"
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             },
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "type": "block",
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:             "vg_name": "ceph_vg0"
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:         }
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]:     ]
Oct 10 10:30:14 compute-0 trusting_stonebraker[299538]: }
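The JSON block above is ceph-volume's LVM inventory for OSD 0: one logical volume ceph_vg0/ceph_lv0 backed by /dev/loop3, with its metadata carried twice, once as the flat lv_tags string and once as the parsed tags object. A minimal Python sketch of that string-to-dict step (assuming tag values never embed commas, which holds for the output shown) is:

    # Parse an lv_tags string like the one logged above into a dict.
    # partition("=") splits on the first '=' only, so values such as
    # "ceph.block_device=/dev/ceph_vg0/ceph_lv0" keep their path intact.
    def parse_lv_tags(lv_tags: str) -> dict[str, str]:
        tags = {}
        for item in lv_tags.split(","):
            if item:
                key, _, value = item.partition("=")
                tags[key] = value
        return tags

    sample = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
              "ceph.osd_id=0,ceph.type=block,ceph.encrypted=0")
    assert parse_lv_tags(sample)["ceph.osd_id"] == "0"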
Oct 10 10:30:14 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:14 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:14 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
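The radosgw trio above, repeated roughly every two seconds from 192.168.122.100 and 192.168.122.102 throughout this section, is the signature of anonymous HEAD / health probes against the beast frontend, each answered with 200 at near-zero latency. A hedged probe sketch, with the frontend port assumed since the log does not show it:

    # Reproduce the anonymous "HEAD / HTTP/1.0" probe seen in the RGW log.
    # The port (8080) is an assumption; use the beast frontend's real port.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the logged probes all return 200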
Oct 10 10:30:14 compute-0 systemd[1]: libpod-53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45.scope: Deactivated successfully.
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.522643932 +0000 UTC m=+0.458690737 container died 53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a9c0fa0724356d31837953387c7a2c330471ca7183969e0158e25bb627a5380-merged.mount: Deactivated successfully.
Oct 10 10:30:14 compute-0 podman[299520]: 2025-10-10 10:30:14.576006392 +0000 UTC m=+0.512053177 container remove 53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:30:14 compute-0 systemd[1]: libpod-conmon-53019ba834292a86d92df854f47795ccc789363cf03e8e1f5b077b670719ee45.scope: Deactivated successfully.
Oct 10 10:30:14 compute-0 sudo[299408]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:14 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:14 compute-0 sudo[299560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:30:14 compute-0 sudo[299560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:14 compute-0 sudo[299560]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:14 compute-0 ceph-mon[73551]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:14 compute-0 sudo[299585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 10:30:14 compute-0 sudo[299585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
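The sudo line above is the cephadm mgr module driving a device scan: the wrapper under /var/lib/ceph/<fsid>/ runs `ceph-volume ... raw list --format json` inside the ceph container image, which is exactly what the short-lived podman containers created and removed just below are doing; dreamy_chandrasekhar prints the result, `{}` (no raw-mode OSDs), further down. Run by hand, the equivalent call looks like this subprocess sketch, with the flags copied from the logged command line:

    # Same inventory call as the logged cephadm invocation, from Python.
    import json, subprocess

    FSID = "21f084a3-af34-5230-afe4-ea5cd24a55f4"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out or "{}"))  # this host logs '{}' further below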
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.230259465 +0000 UTC m=+0.049946872 container create 9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 10:30:15 compute-0 systemd[1]: Started libpod-conmon-9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197.scope.
Oct 10 10:30:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.204772778 +0000 UTC m=+0.024460265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.309848704 +0000 UTC m=+0.129536111 container init 9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.317776828 +0000 UTC m=+0.137464225 container start 9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.320762724 +0000 UTC m=+0.140450131 container attach 9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:30:15 compute-0 goofy_chaum[299668]: 167 167
Oct 10 10:30:15 compute-0 systemd[1]: libpod-9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197.scope: Deactivated successfully.
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.325802885 +0000 UTC m=+0.145490302 container died 9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca679cd21a868b5ba533ae8f3f53b09d1ff116d2b20eb8595ae3f3923ce9dc80-merged.mount: Deactivated successfully.
Oct 10 10:30:15 compute-0 podman[299652]: 2025-10-10 10:30:15.361101236 +0000 UTC m=+0.180788643 container remove 9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 10:30:15 compute-0 systemd[1]: libpod-conmon-9d0ea86402cb070d5a3a13a18819abb8f38e7865dfe0e21fce9aac9902d28197.scope: Deactivated successfully.
Oct 10 10:30:15 compute-0 podman[299692]: 2025-10-10 10:30:15.57094562 +0000 UTC m=+0.050015534 container create aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:30:15 compute-0 systemd[1]: Started libpod-conmon-aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f.scope.
Oct 10 10:30:15 compute-0 podman[299692]: 2025-10-10 10:30:15.548297434 +0000 UTC m=+0.027367338 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:30:15 compute-0 systemd[1]: Started libcrun container.
Oct 10 10:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5d43b6fee077e4dfbcc85dfad76df28d7972e65fba74cb257697ed853561e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5d43b6fee077e4dfbcc85dfad76df28d7972e65fba74cb257697ed853561e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5d43b6fee077e4dfbcc85dfad76df28d7972e65fba74cb257697ed853561e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5d43b6fee077e4dfbcc85dfad76df28d7972e65fba74cb257697ed853561e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:30:15 compute-0 podman[299692]: 2025-10-10 10:30:15.670823179 +0000 UTC m=+0.149893113 container init aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:30:15 compute-0 podman[299692]: 2025-10-10 10:30:15.677202834 +0000 UTC m=+0.156272718 container start aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:30:15 compute-0 podman[299692]: 2025-10-10 10:30:15.680593243 +0000 UTC m=+0.159663127 container attach aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 10:30:15 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:16.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:16 compute-0 lvm[299784]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:30:16 compute-0 lvm[299784]: VG ceph_vg0 finished
Oct 10 10:30:16 compute-0 dreamy_chandrasekhar[299708]: {}
Oct 10 10:30:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:30:16 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:16 compute-0 systemd[1]: libpod-aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f.scope: Deactivated successfully.
Oct 10 10:30:16 compute-0 systemd[1]: libpod-aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f.scope: Consumed 1.125s CPU time.
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Optimize plan auto_2025-10-10_10:30:16
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 10:30:16 compute-0 podman[299692]: 2025-10-10 10:30:16.402240524 +0000 UTC m=+0.881310458 container died aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [balancer INFO root] do_upmap
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', '.nfs', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 10 10:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a5d43b6fee077e4dfbcc85dfad76df28d7972e65fba74cb257697ed853561e-merged.mount: Deactivated successfully.
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:30:16 compute-0 ceph-mgr[73845]: [balancer INFO root] prepared 0/10 upmap changes
Oct 10 10:30:16 compute-0 podman[299692]: 2025-10-10 10:30:16.451742039 +0000 UTC m=+0.930811923 container remove aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:30:16 compute-0 systemd[1]: libpod-conmon-aa06ae3b090d2db2247e21547394e8074b93ed65292d1ad0b92cb924762c036f.scope: Deactivated successfully.
Oct 10 10:30:16 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:16 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:16 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:16 compute-0 sudo[299585]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 10 10:30:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:16 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 10 10:30:16 compute-0 ceph-mon[73551]: log_channel(audit) log [INF] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:16 compute-0 sudo[299803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:30:16 compute-0 sudo[299803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:16 compute-0 sudo[299803]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:16 compute-0 ceph-mon[73551]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:16 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
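The pg_autoscaler arithmetic above is internally consistent: every logged pg target equals the pool's capacity ratio times its bias times a constant 300, which plausibly decomposes as mon_target_pg_per_osd (default 100) times the three ~21.5 GB OSDs behind this 60 GiB cluster; the result is then quantized to a power of two, which is why cephfs.cephfs.meta's 0.0006 target quantizes down to 16 from the current 32. A worked check against two of the logged lines:

    # Verify pg target = capacity_ratio * bias * (target PGs per OSD * OSDs).
    # The 100 * 3 decomposition of 300 is an inference from this cluster's
    # apparent size; the multiplication itself matches the log exactly.
    TARGET_PGS = 100 * 3

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PGS

    assert abs(pg_target(0.000665858301588852, 1.0)   # pool 'images'
               - 0.19975749047665559) < 1e-12
    assert abs(pg_target(5.087256625643029e-07, 4.0)  # 'cephfs.cephfs.meta'
               - 0.0006104707950771635) < 1e-12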
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 10:30:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:17.274Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:30:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:17.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:30:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:17.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
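Alertmanager's ceph-dashboard receiver fans out to webhook URLs on compute-1 and compute-2 port 8443, both of which time out; once the retries are exhausted the dispatcher drops the notification, and the same warn/error pair keeps recurring below. A reachability sketch using the hosts, port, and path taken verbatim from the log (the POST body is a stand-in, not the real alert payload):

    # Probe the dashboard webhook receivers Alertmanager cannot reach.
    import http.client, json

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            conn = http.client.HTTPConnection(host, 8443, timeout=5)
            conn.request("POST", "/api/prometheus_receiver",
                         body=json.dumps({"alerts": []}),
                         headers={"Content-Type": "application/json"})
            print(host, conn.getresponse().status)
        except OSError as exc:
            print(host, "unreachable:", exc)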
Oct 10 10:30:17 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:30:17 compute-0 nova_compute[261329]: 2025-10-10 10:30:17.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:17 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:17 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/862817915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:18.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:18 compute-0 nova_compute[261329]: 2025-10-10 10:30:18.234 2 DEBUG oslo_service.periodic_task [None req-3cee204a-855e-4033-bb9f-ee722a768f4d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:18 compute-0 nova_compute[261329]: 2025-10-10 10:30:18.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:18 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:18 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:18 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:18 compute-0 ceph-mon[73551]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:18 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/832635326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:18.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:30:18 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:18.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:19 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:19 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:20 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:20 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:20 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:20 compute-0 ceph-mon[73551]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:21 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 10 10:30:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:22.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:22 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:22 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:22 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:22.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:22 compute-0 nova_compute[261329]: 2025-10-10 10:30:22.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:22 compute-0 ceph-mon[73551]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 10 10:30:23 compute-0 podman[299837]: 2025-10-10 10:30:23.225283339 +0000 UTC m=+0.060695246 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct 10 10:30:23 compute-0 podman[299836]: 2025-10-10 10:30:23.226622452 +0000 UTC m=+0.064515658 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:30:23 compute-0 podman[299838]: 2025-10-10 10:30:23.251294652 +0000 UTC m=+0.080404857 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
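The three health_status=healthy events above come from podman's timer-driven healthchecks on the edpm_ansible-managed containers: each config mounts /var/lib/openstack/healthchecks/<name> read-only at /openstack and runs /openstack/healthcheck as the test. The recorded state can be read back with podman inspect; a sketch follows (the State.Health.Status key path follows podman's Docker-compatible inspect layout):

    # Read the health state podman recorded for the containers logged above.
    import json, subprocess

    for name in ("iscsid", "multipathd", "ovn_controller"):
        raw = subprocess.run(["podman", "inspect", name], check=True,
                             capture_output=True, text=True).stdout
        state = json.loads(raw)[0]["State"]
        print(name, state.get("Health", {}).get("Status", "no healthcheck"))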
Oct 10 10:30:23 compute-0 nova_compute[261329]: 2025-10-10 10:30:23.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:23 compute-0 sudo[299902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:30:23 compute-0 sudo[299902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:23 compute-0 sudo[299902]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:23 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:30:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:30:24 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:24 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:24 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:24 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:24 compute-0 ceph-mon[73551]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:25 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:26 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:26 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:30:26 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:30:26 compute-0 ceph-mon[73551]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1496174560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:30:26 compute-0 ceph-mon[73551]: from='client.? 192.168.122.10:0/1496174560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:30:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:27.277Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:30:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:27.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 10 10:30:27 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:30:27 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:30:27 compute-0 nova_compute[261329]: 2025-10-10 10:30:27.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:27 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:28.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:28 compute-0 nova_compute[261329]: 2025-10-10 10:30:28.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:28 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:28 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:28 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:28 compute-0 ceph-mon[73551]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:28 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:28.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:29 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:29 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:30 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:30 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:30:30 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:30.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:30:30 compute-0 ceph-mon[73551]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:31 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:30:31 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:31 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:31 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:32 compute-0 nova_compute[261329]: 2025-10-10 10:30:32.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:32 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:32 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:32 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:32.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:32 compute-0 ceph-mon[73551]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:33 compute-0 nova_compute[261329]: 2025-10-10 10:30:33.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:33 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:34 compute-0 sshd-session[299937]: Accepted publickey for zuul from 192.168.122.10 port 49106 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:30:34 compute-0 systemd-logind[806]: New session 61 of user zuul.
Oct 10 10:30:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:34 compute-0 systemd[1]: Started Session 61 of User zuul.
Oct 10 10:30:34 compute-0 sshd-session[299937]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:30:34 compute-0 podman[299939]: 2025-10-10 10:30:34.115299599 +0000 UTC m=+0.071622296 container health_status e18404962f888f09b32ff3618b289b19acca659b3b1e53be91bef79b4939aff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 10 10:30:34 compute-0 sudo[299960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 10 10:30:34 compute-0 sudo[299960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
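The sudo line above is the zuul job collecting diagnostics: it recreates /var/tmp/sos-osp and runs a non-interactive sos report (--batch) that also gathers rotated logs (--all-logs), restricted to the container, openstack_edpm, system, storage, and virt profiles. Once the run finishes, the archive can be located with a sketch like this (the sosreport-*.tar.xz naming is sos's usual pattern, assumed here):

    # List the archive(s) produced by the sos run logged above.
    from pathlib import Path

    for archive in sorted(Path("/var/tmp/sos-osp").glob("sosreport-*.tar.xz")):
        print(archive, archive.stat().st_size, "bytes")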
Oct 10 10:30:34 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:34 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:34 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:34.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:34 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:34 compute-0 ceph-mon[73551]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:35 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:36 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:36 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:36 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:36 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27688 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:36 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26969 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:36 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18153 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:36 compute-0 ceph-mon[73551]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:37 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27703 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:37.278Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:30:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:37.280Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:37 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18159 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:30:37 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 10 10:30:37 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26975 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 nova_compute[261329]: 2025-10-10 10:30:37.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:37 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:37 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 10:30:37 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43274649' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.27688 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.26969 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.18153 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.27703 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.18159 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.26975 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1165768626' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:37 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/43274649' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:38 compute-0 nova_compute[261329]: 2025-10-10 10:30:38.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:38 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:38 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:38 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:38.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:38 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:38.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:38 compute-0 ceph-mon[73551]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:38 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3130338205' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:39 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.667550) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239667574, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1443, "num_deletes": 258, "total_data_size": 2597192, "memory_usage": 2645504, "flush_reason": "Manual Compaction"}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239682848, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2541098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36937, "largest_seqno": 38378, "table_properties": {"data_size": 2534363, "index_size": 3806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14470, "raw_average_key_size": 20, "raw_value_size": 2520675, "raw_average_value_size": 3486, "num_data_blocks": 164, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760092107, "oldest_key_time": 1760092107, "file_creation_time": 1760092239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 15354 microseconds, and 5638 cpu microseconds.
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.682900) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2541098 bytes OK
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.682922) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.685759) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.685779) EVENT_LOG_v1 {"time_micros": 1760092239685774, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.685798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2590949, prev total WAL file size 2590949, number of live WAL files 2.
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.686811) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2481KB)], [80(12MB)]
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239686849, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15157375, "oldest_snapshot_seqno": -1}
Oct 10 10:30:39 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6864 keys, 14996028 bytes, temperature: kUnknown
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239769431, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14996028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14950700, "index_size": 27040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 180307, "raw_average_key_size": 26, "raw_value_size": 14827531, "raw_average_value_size": 2160, "num_data_blocks": 1069, "num_entries": 6864, "num_filter_entries": 6864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089395, "oldest_key_time": 0, "file_creation_time": 1760092239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b11c2339-35ff-491c-b185-eda5e2ea0ba8", "db_session_id": "X51S9MA51CSPL9DJ2ZU1", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.769808) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14996028 bytes
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.771353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.1 rd, 181.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.0 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 7398, records dropped: 534 output_compression: NoCompression
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.771374) EVENT_LOG_v1 {"time_micros": 1760092239771363, "job": 46, "event": "compaction_finished", "compaction_time_micros": 82802, "compaction_time_cpu_micros": 38483, "output_level": 6, "num_output_files": 1, "total_output_size": 14996028, "num_input_records": 7398, "num_output_records": 6864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239772414, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239775282, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.686718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.775514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.775519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.775521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.775522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-0 ceph-mon[73551]: rocksdb: (Original Log Time 2025/10/10-10:30:39.775523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:40.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:40 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:40 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:40 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:40 compute-0 ceph-mon[73551]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:40 compute-0 ovs-vsctl[300285]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 10 10:30:41 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:41 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 10 10:30:41 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 10 10:30:41 compute-0 virtqemud[260504]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 10:30:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:30:41.919 162925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:30:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:30:41.919 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:30:41 compute-0 ovn_metadata_agent[162919]: 2025-10-10 10:30:41.919 162925 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:30:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:42.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:42 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: cache status {prefix=cache status} (starting...)
Oct 10 10:30:42 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:42 compute-0 nova_compute[261329]: 2025-10-10 10:30:42.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:42 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:42 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:42 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:42 compute-0 lvm[300579]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:30:42 compute-0 lvm[300579]: VG ceph_vg0 finished
Oct 10 10:30:42 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27724 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:42 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: client ls {prefix=client ls} (starting...)
Oct 10 10:30:42 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:42 compute-0 ceph-mon[73551]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:42 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:30:42 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27736 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: damage ls {prefix=damage ls} (starting...)
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18186 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump loads {prefix=dump loads} (starting...)
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-0 nova_compute[261329]: 2025-10-10 10:30:43.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27751 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:30:43 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271142430' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:43 compute-0 sudo[300828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:30:43 compute-0 sudo[300828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:43 compute-0 sudo[300828]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.27724 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/561687218' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.27736 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1504914682' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.18186 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.27751 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4271142430' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/171472224' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18204 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27763 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.26996 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:30:43 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593539214' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 10 10:30:43 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:30:43 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:44 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18222 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 10 10:30:44 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 10 10:30:44 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064114185' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27008 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 10 10:30:44 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27808 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:44 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:44 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:44 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: ops {prefix=ops} (starting...)
Oct 10 10:30:44 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18243 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:44 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27029 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 10 10:30:44 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558230964' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.18204 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.27763 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.26996 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3593539214' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/206671409' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2269363297' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.18222 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1064114185' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.27008 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/17616546' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2810672141' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.27808 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1752163504' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3250802086' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2558230964' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:44 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27832 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 10 10:30:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132749714' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27044 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27850 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:30:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/731166632' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:30:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: session ls {prefix=session ls} (starting...)
Oct 10 10:30:45 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo Can't run that command on an inactive MDS!
Oct 10 10:30:45 compute-0 ceph-mds[96159]: mds.cephfs.compute-0.cchwlo asok_command: status {prefix=status} (starting...)
Oct 10 10:30:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:30:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565812220' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18297 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27077 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.18243 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.27029 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.27832 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/132749714' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.27044 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.27850 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/731166632' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2494599266' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3206344806' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/565812220' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3277923492' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2827576184' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2830212554' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:45 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:30:45 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111998569' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:30:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1189640479' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27092 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27901 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:30:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T10:30:46.121+0000 7f4fc5754640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:30:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:30:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:30:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110223178' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 10:30:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 10 10:30:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461548462' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:46 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:46 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:46 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:46.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:30:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.18297 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.27077 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3111998569' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1189640479' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3172139523' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2107248748' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.27092 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.27901 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/110223178' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3723288549' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3461548462' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/547263865' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3113939320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2575731666' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:30:46 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006396374' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18363 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T10:30:46.899+0000 7f4fc5754640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:30:46 compute-0 ceph-mgr[73845]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:30:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 10 10:30:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498770159' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:47.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 10 10:30:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:47.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:47 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 10 10:30:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040415731' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:30:47 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:30:47 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27140 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: 2025-10-10T10:30:47.423+0000 7f4fc5754640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:30:47 compute-0 ceph-mgr[73845]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 10 10:30:47 compute-0 nova_compute[261329]: 2025-10-10 10:30:47.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:47 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27967 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:47 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 10 10:30:47 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898906544' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3006396374' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.18363 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1715209047' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2912853044' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3051349397' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/910388127' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3498770159' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.27949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2040415731' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3858622949' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.27140 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4142265369' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1724959652' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/683696595' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:47 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2898906544' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18420 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27979 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:30:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585167422' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:48 compute-0 nova_compute[261329]: 2025-10-10 10:30:48.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:48 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:48 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:48.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27179 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27994 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:30:48 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231707591' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18468 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.27967 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1085433634' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/149990470' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.18420 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.27979 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1680992890' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1585167422' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2303646199' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2469062832' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3231707591' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2013373529' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:48 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:48.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1228800 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:54.631563+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1228800 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:55.631710+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1228800 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:56.631916+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:57.632105+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:58.632241+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:59.632381+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:00.632507+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:01.632657+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:02.632815+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:03.632960+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:04.633143+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:05.633345+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:06.633487+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:07.633655+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:08.633874+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:09.634054+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:10.634210+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:11.634354+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1220608 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:12.634550+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:13.634679+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:14.634851+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:15.635081+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:16.635378+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:17.635590+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:18.635815+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:19.636047+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:20.636283+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:21.636460+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:22.636775+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:23.637014+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:24.637265+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:25.637444+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [3])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:26.637592+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:27.637766+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:28.637944+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:29.638093+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:30.638236+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1204224 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:31.638418+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:32.638595+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:33.638778+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:34.638979+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:35.639147+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:36.639283+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:37.639501+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:38.639669+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:39.639856+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:40.640017+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:41.640181+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1187840 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:42.640411+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:43.640570+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:44.640728+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:45.640912+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:46.641124+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:47.641423+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:48.641715+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:49.641965+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:50.642142+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1179648 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:51.642375+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:52.642529+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:53.642687+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:54.642834+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:55.642979+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:56.643136+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:57.643334+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:58.643486+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:59.643600+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:00.643749+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:01.643914+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:02.644080+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:03.644230+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:04.644479+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:05.644661+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:06.644825+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:07.645025+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:08.645179+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:09.645367+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:10.645521+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1163264 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:11.645661+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:12.645862+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:13.646063+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:14.646215+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1146880 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:15.646435+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:16.646697+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:17.646906+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:18.647041+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:19.647251+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1138688 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:20.647389+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:21.647637+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:22.647793+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:23.648006+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:24.648170+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:25.648345+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:26.648592+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:27.648769+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:28.648914+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:29.649052+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:30.649212+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:31.649450+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:32.649741+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:33.649874+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:34.650064+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:35.650223+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:36.650387+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:37.650545+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:38.650696+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:39.650852+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:40.651012+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:41.651162+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:42.651411+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:43.651606+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:44.651798+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:45.652024+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a91942000
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:46.652172+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:47.652334+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:48.652498+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:49.652783+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:50.653084+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:51.653250+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1089536 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941998 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:52.653385+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:53.653607+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:54.653757+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:55.653895+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 166.029067993s of 166.031768799s, submitted: 1
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:56.654078+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90539c00 session 0x562a91045860
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942130 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:57.654247+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:58.654451+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1073152 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:59.654605+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:00.654784+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:01.654922+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943658 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:02.655076+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:03.655268+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:04.655437+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:05.655674+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:06.655851+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943658 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:07.656084+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1064960 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca000
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.241256714s of 11.261567116s, submitted: 5
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:08.656384+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 1048576 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:09.656574+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1040384 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:10.656769+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1040384 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:11.656937+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 999424 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943790 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:12.657147+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:13.657316+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:14.657505+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:15.657686+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:16.657846+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943820 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:17.658097+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:18.658378+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:19.658648+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:20.658836+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:21.658971+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.572020531s of 14.684833527s, submitted: 16
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:22.659186+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:23.659397+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:24.659559+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:25.660097+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:26.660273+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 950272 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:27.660466+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907ca000 session 0x562a919950e0
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:28.660644+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:29.660801+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:30.660959+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:31.661190+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:32.661351+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:33.661487+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:34.661665+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce400 session 0x562a900ac960
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:35.661975+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:36.662146+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943840 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:37.662535+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:38.662742+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.899217606s of 16.903829575s, submitted: 1
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:39.662926+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:40.663114+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:41.663401+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:42.663548+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945500 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:43.663680+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:44.664484+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:45.664629+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:46.664760+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:47.664950+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945632 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:48.665091+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:49.665349+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:50.665598+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 827392 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:51.665773+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.463492393s of 12.501100540s, submitted: 11
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:52.665916+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946553 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:53.666121+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:54.666297+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:55.666428+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:56.666688+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:57.666901+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946405 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:58.667084+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e8d3000 session 0x562a91931680
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 729088 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:59.667393+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 720896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:00.667565+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 720896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:01.667708+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 720896 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:02.667958+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945814 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.818078995s of 10.910258293s, submitted: 11
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:03.668117+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:04.668277+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:05.668472+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:06.668660+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:07.668871+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945682 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:08.669045+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 704512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:09.669216+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90539c00
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 688128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:10.669423+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 688128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:11.669562+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 688128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:12.669745+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945830 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 663552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:13.670002+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 663552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:14.670137+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 663552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:15.670307+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.875679970s of 12.895929337s, submitted: 6
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 655360 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:16.670495+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:17.670695+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945830 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:18.670908+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:19.671057+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:20.671223+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 647168 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:21.671428+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 630784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:22.671606+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945223 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 630784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:23.672215+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 630784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:24.672714+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 606208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:25.673233+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 606208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cac00 session 0x562a8ef27c20
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca000
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:26.673420+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 606208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:27.673588+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 598016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90860400 session 0x562a8ef4b680
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cb400
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:28.673778+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 581632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:29.674016+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 581632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:30.674131+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 581632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:31.674315+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 573440 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:32.674490+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:33.674618+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:34.674765+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:35.674920+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:36.675094+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:37.675267+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:38.676865+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:39.677018+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:40.677148+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90539c00 session 0x562a91974d20
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:41.677269+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:42.677396+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:43.677517+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:44.677698+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:45.677883+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:46.678088+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:47.678273+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945091 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:48.678442+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:49.678568+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:50.678754+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.936725616s of 35.959014893s, submitted: 6
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:51.678913+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 557056 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:52.679071+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945223 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 540672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:53.679264+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 540672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:54.679412+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 532480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:55.679574+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 532480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:56.679731+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 524288 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:57.679946+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946751 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 507904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:58.680122+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 507904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:59.680266+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 507904 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:00.680430+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 491520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:01.680674+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 491520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:02.680844+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946751 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 491520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:03.680985+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 475136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.957537651s of 12.999587059s, submitted: 11
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:04.681119+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:05.681484+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:06.681659+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:07.681858+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946451 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:08.681997+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:09.682156+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:10.682360+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:11.682506+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:12.682636+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:13.682788+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:14.682934+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:15.683192+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:16.683351+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:17.683843+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:18.684016+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:19.684238+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:20.684440+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:21.684619+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:22.684815+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:23.684987+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:24.685164+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:25.685436+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:26.685571+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:27.685836+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:28.686086+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:29.686344+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:30.686522+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:31.686655+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:32.686909+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:33.687209+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a8ef27e00
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:34.687407+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:35.687566+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:36.687721+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:37.687935+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:38.688495+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:39.688721+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:40.688892+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:41.689125+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:42.689810+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946603 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:43.689970+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:44.690453+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.940834045s of 40.944816589s, submitted: 1
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:45.690864+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:46.691101+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 425984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:47.691357+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946735 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 425984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:48.691508+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:49.691708+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:50.691870+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cbc00 session 0x562a9145d860
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:51.692089+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:52.692251+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948263 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:53.692406+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:54.692567+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:55.692800+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:56.692959+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:57.693110+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947656 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:58.693290+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:59.693443+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:00.693597+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.211167336s of 15.258452415s, submitted: 10
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:01.693732+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:02.693903+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:48 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:48 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947656 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:03.694062+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 458752 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:04.694222+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:05.694382+0000)
Oct 10 10:30:48 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:48 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27191 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:06.694522+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 466944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:07.694736+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950696 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:08.694918+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:09.695058+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 450560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:10.695224+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:11.695443+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:12.695607+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950696 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.754131317s of 12.794165611s, submitted: 12
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:13.695743+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:14.695981+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:15.696136+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:16.696300+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:17.696589+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:18.696753+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:19.696913+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:20.697050+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:21.697224+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:22.697436+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:23.697575+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:24.697715+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:25.697874+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a904a4000 session 0x562a9190b860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:26.698021+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:27.698212+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:28.698399+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:29.698563+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:30.698686+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:31.698815+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:32.699003+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949957 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:33.699142+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:34.699363+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:35.699638+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:36.699811+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 442368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.348936081s of 23.356918335s, submitted: 2
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,1])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:37.700122+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950089 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:38.700295+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:39.700531+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 434176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:40.700725+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 417792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:41.700926+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 417792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:42.701112+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 417792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950105 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:43.701292+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:44.701501+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:45.701640+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:46.701788+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:47.702000+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1466368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949346 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:48.702162+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.013302803s of 12.052791595s, submitted: 10
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:49.702317+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:50.702623+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:51.702851+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:52.703064+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:53.703210+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:54.703373+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:55.703508+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:56.703694+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:57.703921+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90f18400 session 0x562a91995c20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:58.704203+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90861400 session 0x562a9145dc20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:59.704423+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:00.704606+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:01.704822+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:02.704971+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:03.705258+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:04.708422+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:05.708564+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:06.708760+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1458176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:07.708975+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:08.709118+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.434091568s of 20.441444397s, submitted: 2
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:09.709270+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:10.709494+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:11.709835+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1449984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:12.710016+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949055 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:13.710424+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:14.710609+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:15.710788+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:16.711024+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:17.711434+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949055 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:18.711636+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1433600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:19.711826+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1425408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:20.712056+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1425408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:21.712268+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:22.712463+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28015 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7747 writes, 31K keys, 7747 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7747 writes, 1564 syncs, 4.95 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 695 writes, 1219 keys, 695 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 695 writes, 338 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.178       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd449b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562a8cd45350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949039 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:23.712625+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:24.712921+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1417216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.051300049s of 16.089799881s, submitted: 12
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:25.713190+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 1409024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000044s
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:26.713411+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:27.713617+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:28.713821+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:29.713976+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:30.714203+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:31.714391+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:32.714604+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:33.714762+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:34.714913+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:35.715068+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:36.715284+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:37.715568+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:38.715725+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:39.715917+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:40.716101+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:41.716815+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:42.717235+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1400832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:43.718464+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:44.718597+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:45.718701+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:46.718834+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:47.719029+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:48.719183+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:49.719373+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:50.719532+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:51.719705+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:52.719876+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:53.720034+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:54.720428+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:55.720711+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:56.720864+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:57.721158+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:58.721376+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:59.721578+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:00.721912+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:01.722197+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:02.722401+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:03.722747+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:04.723106+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:05.723359+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1384448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:06.723503+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:07.723687+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:08.723861+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:09.724010+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:10.724153+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:11.724299+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:12.724487+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:13.724622+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:14.724745+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:15.724902+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:16.725102+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1376256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:17.725296+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:18.725457+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:19.725616+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:20.725801+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:21.725973+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:22.726107+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:23.726305+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1368064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:24.726498+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:25.726637+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:26.726792+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:27.727109+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:28.727315+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:29.727583+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:30.727822+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:31.728037+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:32.728235+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:33.728398+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:34.728550+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:35.728774+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1359872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:36.728925+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:37.729191+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:38.729426+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:39.729642+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:40.729869+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:41.730079+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:42.730302+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:43.730519+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:44.751112+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:45.751483+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:46.751806+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:47.752009+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a904a4000 session 0x562a915d5680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:48.752155+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:49.752414+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:50.752697+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:51.752923+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:52.753144+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:53.753418+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:54.753687+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:55.753903+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:56.754089+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:57.754464+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:58.754705+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90539c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 93.973350525s of 93.981391907s, submitted: 2
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:59.754940+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:00.755164+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:01.755433+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:02.755666+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:03.755864+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948923 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:04.756268+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 1351680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:05.756423+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:06.756693+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:07.757105+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:08.757302+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948923 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:09.757563+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:10.757858+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:11.758088+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1343488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:12.758298+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1327104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:13.759112+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948923 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1327104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.952961922s of 15.011597633s, submitted: 9
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:14.759704+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 1318912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:15.760914+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1310720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:16.761182+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a91954960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1310720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:17.761375+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1310720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:18.761593+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:19.762558+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:20.762806+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:21.763129+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:22.763599+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:23.764088+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948775 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:24.764500+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:25.764910+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:26.765227+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1302528 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:27.765631+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.264195442s of 13.267696381s, submitted: 1
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1286144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:28.765942+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948907 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1286144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:29.766120+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1286144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:30.766519+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1261568 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:31.766774+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca70000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1245184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90539c00 session 0x562a91974000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:32.766909+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1245184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:33.767126+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950435 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1155072 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:34.767274+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 974848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:35.767427+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:36.767612+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:37.767846+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:38.768007+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950419 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:39.768221+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:40.768410+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:41.768608+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 1982464 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:42.768797+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 1974272 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.004447937s of 15.682758331s, submitted: 222
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:43.769022+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950551 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 1966080 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:44.769265+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 1966080 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:45.769446+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:46.769686+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:47.769916+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a8e8d3000 session 0x562a915983c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:48.770091+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951947 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:49.770298+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:50.770518+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1933312 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:51.770683+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1933312 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:52.770840+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:53.770989+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951947 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 1925120 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:54.771185+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:55.771367+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:56.771529+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:57.771685+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a903ce000 session 0x562a917fd2c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:58.771932+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.269571304s of 15.308749199s, submitted: 11
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952063 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:59.772093+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:00.772291+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:01.772521+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:02.772674+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:03.772815+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953459 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:04.772984+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:05.773119+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:06.773361+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:07.773636+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1867776 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:08.773792+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953459 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.677397728s of 10.700782776s, submitted: 6
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1859584 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:09.773969+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1859584 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:10.774159+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1859584 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:11.774381+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1818624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:12.774543+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1818624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:13.774709+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952868 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1818624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:14.774953+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:15.775159+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:16.775298+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 1802240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:17.775564+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 2080768 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:18.775760+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952852 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 2080768 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:19.775951+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 2072576 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:20.776152+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.044954300s of 12.094819069s, submitted: 14
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 2048000 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:21.776308+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 2048000 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:22.776481+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:23.776624+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:24.776764+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:25.776928+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:26.777070+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:27.777292+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 2031616 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:28.777570+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:29.777709+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:30.777866+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:31.778031+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:32.778196+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:33.778454+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:34.778613+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:35.778750+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:36.778918+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:37.779139+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:38.779266+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:39.779419+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:40.779562+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:41.779766+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:42.780017+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:43.780175+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:44.780397+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:45.780558+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 2015232 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:46.780725+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:47.780906+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:48.781047+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:49.781289+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:50.781481+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:51.781675+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:52.781901+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:53.782038+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:54.782178+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:55.782365+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:56.782548+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:57.782773+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:58.782910+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:59.783127+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:00.783255+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:01.783484+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:02.783632+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:03.783791+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:04.784090+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:05.784318+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:06.784589+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:07.784784+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:08.784951+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:09.785098+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:10.785241+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:11.785446+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:12.785633+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:13.785797+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:14.786049+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:15.786223+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:16.786380+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:17.786666+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:18.786857+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:19.787022+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:20.787156+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:21.787400+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a90861400 session 0x562a914d1860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:22.787719+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:23.787948+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:24.788484+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:25.788619+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:26.788756+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:27.788942+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:28.789132+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952129 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:29.789283+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:30.789428+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:31.789589+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:32.789714+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 71.986579895s of 71.992851257s, submitted: 2
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:33.789842+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952261 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:34.789955+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:35.790096+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:36.790247+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:37.790511+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:38.790691+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 2023424 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955301 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:39.790896+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:40.791084+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 2007040 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:41.791229+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:42.791416+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:43.791563+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955301 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:44.791712+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 1998848 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:45.791836+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:46.791985+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:47.792211+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:48.792388+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.808482170s of 15.905633926s, submitted: 12
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:49.792557+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955001 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:50.792760+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a907cbc00 session 0x562a918530e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:51.792946+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:52.793251+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:53.793413+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:54.793618+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955153 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:55.793797+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:56.793964+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:57.794160+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:58.794475+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:59.794617+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955153 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:00.794807+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:01.795559+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 1990656 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91acf800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.572267532s of 12.576242447s, submitted: 1
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:02.795970+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:03.796089+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:04.796502+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955285 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:05.796741+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:06.796871+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:07.797077+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1957888 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:08.797261+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:09.797395+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955301 data_alloc: 218103808 data_used: 135168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:10.797545+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:11.797677+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:12.797805+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:13.797963+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:14.798109+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954103 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:15.798270+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:16.798386+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1949696 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:17.798585+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.693298340s of 15.739171028s, submitted: 12
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:18.798713+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:19.798854+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953971 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1908736 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91acfc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:20.799041+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0xf868d/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 ms_handle_reset con 0x562a91acf800 session 0x562a907f7c20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 1892352 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:21.799214+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 1884160 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _renew_subs
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:22.799376+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 18440192 heap: 102727680 old mem: 2845415832 new mem: 2845415832
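This is the first tune_memory line in the section where the heap figure moves, jumping from 85942272 to 102727680 bytes just as the burst of new osdmaps arrives; the unmapped count jumps by nearly the same amount relative to the previous tune_memory line, so tcmalloc grew the heap by ~16 MiB and most of it was immediately released back:

    before, after = 85942272, 102727680
    print(f"heap grew {(after - before) / 2**20:.1f} MiB")          # 16.0 MiB
    print(f"unmapped grew {(18440192 - 1884160) / 2**20:.1f} MiB")  # 15.8 MiB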
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 146 ms_handle_reset con 0x562a91acfc00 session 0x562a900f9a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:23.799551+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd6ea17/0xe28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18292736 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _renew_subs
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb9e3000/0x0/0x4ffc00000, data 0xd6ea17/0xe28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:24.799726+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192488 data_alloc: 218103808 data_used: 143360
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 18251776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 147 ms_handle_reset con 0x562a8e8d3000 session 0x562a8db550e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:25.799910+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 18251776 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fa56e000/0x0/0x4ffc00000, data 0x21e09f9/0x229c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
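The handle_osd_map lines trace the OSD catching up from epoch 143 to 148: each message names the inclusive range the monitor sent ("epochs [a,b]"), the epoch in the OSD's superblock ("i have x"), and the full range the source holds. In the pair above, the second line still carries the "osd.0 147" prefix while already reporting "i have 148"; presumably the prefix epoch is captured before the batch commits. A toy model of the catch-up (Python; apply_osd_maps is a hypothetical stand-in, not a Ceph function):

    def apply_osd_maps(have: int, first: int, last: int) -> int:
        """Apply incremental maps [first, last], skipping epochs already held."""
        for epoch in range(max(first, have + 1), last + 1):
            have = epoch  # stand-in for decoding and committing one map
        return have

    have = 143
    batches = [(143, 144), (145, 145), (145, 146), (147, 147), (148, 148), (148, 148)]
    for first, last in batches:
        have = apply_osd_maps(have, first, last)
        print(f"epochs [{first},{last}] -> i have {have}")
    # Mirrors the log: 144, 145, 146, 147, 148, then a no-op second 148,
    # matching the duplicate-looking [148,148] pair above.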
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:26.800061+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:27.800251+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:28.800441+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:29.800615+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194146 data_alloc: 218103808 data_used: 143360
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:30.800811+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:31.800976+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.001250267s of 14.248284340s, submitted: 57
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:32.801162+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:33.801378+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 18235392 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:34.803951+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193438 data_alloc: 218103808 data_used: 143360
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 18227200 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:35.804142+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 18227200 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:36.804307+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 18219008 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:37.804507+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:38.804641+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:39.804812+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193454 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:40.804976+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 18202624 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:41.805115+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:42.805255+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:43.805420+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:44.805559+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193454 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:45.805734+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18194432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:46.805884+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.964175224s of 15.001649857s, submitted: 10
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 18186240 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:47.806061+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 18186240 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:48.806228+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:49.806480+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193154 data_alloc: 218103808 data_used: 139264
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:50.806608+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:51.806749+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:52.806909+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:53.807226+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18178048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:54.807378+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193306 data_alloc: 218103808 data_used: 143360
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:55.807590+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:56.807732+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:57.807924+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:58.808181+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:59.808400+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193306 data_alloc: 218103808 data_used: 143360
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:00.808537+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:01.808720+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:02.808883+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 18169856 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a907cbc00 session 0x562a91973c20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a90861400 session 0x562a91852b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e768000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a8e768000 session 0x562a90934000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:03.809069+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a904a4000 session 0x562a9159fa40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 18161664 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a8e8d3000 session 0x562a90f645a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:04.809253+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a907cbc00 session 0x562a9086fa40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223858 data_alloc: 234881024 data_used: 11616256
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 6692864 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 ms_handle_reset con 0x562a90861400 session 0x562a90991a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91acfc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:05.809401+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.849899292s of 18.852664948s, submitted: 1
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa56d000/0x0/0x4ffc00000, data 0x21e2ae5/0x229f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 96051200 unmapped: 6676480 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:06.809548+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a91acfc00 session 0x562a9152fc20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 5562368 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a8e8d3000 session 0x562a91ae34a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a904a4000 session 0x562a9076a780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a907cbc00 session 0x562a915d5e00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 ms_handle_reset con 0x562a90861400 session 0x562a91975860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:07.809735+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 5480448 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:08.809909+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9f6c000/0x0/0x4ffc00000, data 0x27ded21/0x289e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 5447680 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:09.810235+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283676 data_alloc: 234881024 data_used: 11616256
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:10.810407+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:11.810605+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:12.810934+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:13.811099+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97304576 unmapped: 5423104 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f6a000/0x0/0x4ffc00000, data 0x27e0cf3/0x28a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:14.811261+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120cc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a907f70e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287641 data_alloc: 234881024 data_used: 11616256
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 5079040 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:15.811429+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 5079040 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:16.811578+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1777664 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:17.811800+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.755753517s of 11.909161568s, submitted: 48
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 753664 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:18.811998+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 753664 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:19.812177+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331825 data_alloc: 234881024 data_used: 17592320
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 753664 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:20.812365+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:21.812601+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:22.813249+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:23.813498+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:24.813650+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331825 data_alloc: 234881024 data_used: 17592320
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 737280 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:25.813859+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 729088 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:26.814046+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 729088 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x2804d16/0x28c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:27.814213+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.845973015s of 10.017697334s, submitted: 63
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 4096000 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:28.814364+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 4153344 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:29.814545+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406217 data_alloc: 234881024 data_used: 18051072
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 4153344 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:30.814674+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 4153344 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8591000/0x0/0x4ffc00000, data 0x3011d16/0x30d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:31.814841+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 4136960 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:32.814984+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8591000/0x0/0x4ffc00000, data 0x3011d16/0x30d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 4104192 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:33.815175+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 4104192 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:34.815286+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400737 data_alloc: 234881024 data_used: 18055168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:35.815463+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:36.815636+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:37.815814+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:38.816119+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x3035d16/0x30f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:39.816372+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400737 data_alloc: 234881024 data_used: 18055168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:40.816520+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 5275648 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.514300346s of 13.687804222s, submitted: 45
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:41.816666+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:42.816806+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:43.816968+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f856b000/0x0/0x4ffc00000, data 0x303fd16/0x3101000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:44.817108+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400729 data_alloc: 234881024 data_used: 18055168
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:45.817301+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:46.817469+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f856b000/0x0/0x4ffc00000, data 0x303fd16/0x3101000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:47.817693+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 5185536 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:48.817830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b36c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b36c00 session 0x562a9190af00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b36800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b36800 session 0x562a909343c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b36400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b36400 session 0x562a91045a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108019712 unmapped: 5193728 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a917fdc20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:49.817999+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c400 session 0x562a91995860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400917 data_alloc: 234881024 data_used: 18579456
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8568000/0x0/0x4ffc00000, data 0x3042d16/0x3104000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 3784704 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a91663e00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:50.818119+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33400 session 0x562a916623c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33000 session 0x562a9076be00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:51.818239+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:52.819669+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9190b2c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:53.819830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:54.819957+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413158 data_alloc: 234881024 data_used: 18579456
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849f000/0x0/0x4ffc00000, data 0x310ad78/0x31cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:55.820138+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 3268608 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849f000/0x0/0x4ffc00000, data 0x310ad78/0x31cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:56.820341+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 3252224 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:57.820494+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 3252224 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:58.820642+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 3252224 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849f000/0x0/0x4ffc00000, data 0x310ad78/0x31cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.070981979s of 18.176597595s, submitted: 30
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:59.820759+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412966 data_alloc: 234881024 data_used: 18579456
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:00.820997+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:01.821158+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849c000/0x0/0x4ffc00000, data 0x310bd78/0x31ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:02.821314+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 3653632 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:03.821469+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 3211264 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:04.821646+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849c000/0x0/0x4ffc00000, data 0x310bd78/0x31ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416818 data_alloc: 234881024 data_used: 19197952
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:05.821818+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:06.822111+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849e000/0x0/0x4ffc00000, data 0x310bd78/0x31ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:07.822346+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:08.822515+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:09.822679+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.322134018s of 10.349461555s, submitted: 10
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416834 data_alloc: 234881024 data_used: 19193856
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:10.822824+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:11.822979+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f849b000/0x0/0x4ffc00000, data 0x310ed78/0x31d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 3186688 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:12.823110+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 3170304 heap: 113213440 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:13.823311+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110526464 unmapped: 3735552 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:14.823521+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443015 data_alloc: 234881024 data_used: 19501056
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 3612672 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:15.823780+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 3612672 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:16.823932+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810f000/0x0/0x4ffc00000, data 0x349ad78/0x355d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:17.824228+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:18.824425+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:19.824593+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450314 data_alloc: 234881024 data_used: 19574784
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 3579904 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:20.824750+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.018276215s of 11.213406563s, submitted: 61
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 3448832 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:21.824887+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f80ee000/0x0/0x4ffc00000, data 0x34bbd78/0x357e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 3448832 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:22.825026+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 3448832 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:23.825190+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32c00 session 0x562a9086f0e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9086e1e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:24.825392+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f855b000/0x0/0x4ffc00000, data 0x304cd16/0x310e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405668 data_alloc: 234881024 data_used: 18579456
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:25.825692+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:26.825898+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:27.826127+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:28.826292+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:29.826514+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a9197ef00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a91974b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405668 data_alloc: 234881024 data_used: 18579456
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 4087808 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:30.826635+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a91973e00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:31.826851+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:32.827032+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:33.827199+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:34.827347+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:35.827557+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:36.827769+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:37.827988+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:38.828269+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:39.828501+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:40.828725+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:41.828895+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:42.829058+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:43.829249+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:44.829387+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:45.829538+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:46.829650+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:47.829794+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:48.829916+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:49.830172+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:50.830371+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 6889472 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:51.830587+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:52.830737+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:53.830894+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:54.831061+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251086 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 6881280 heap: 114262016 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:55.831185+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.818191528s of 34.928936005s, submitted: 42
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33000 session 0x562a90f943c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a915d4960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a910454a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a916625a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a914d1a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:56.831365+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2589d45/0x264a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:57.831521+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2589d45/0x264a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:58.831670+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:59.831896+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120cc00 session 0x562a919952c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287964 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:00.832075+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:01.832212+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:02.832379+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x2589d45/0x264a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:03.832534+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a919554a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9152f860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:04.832744+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a8e1bfc20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a91930d20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289778 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:05.832985+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:06.833136+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:07.833405+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 10698752 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:08.834002+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:09.834414+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306222 data_alloc: 234881024 data_used: 14807040
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:10.834716+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.633553505s of 14.775516510s, submitted: 42
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:11.835086+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:12.835393+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:13.835676+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 10633216 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9021000/0x0/0x4ffc00000, data 0x2589d54/0x264b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:14.835985+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 9576448 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306202 data_alloc: 234881024 data_used: 14802944
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:15.836118+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 9576448 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:16.836246+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 9576448 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:17.836415+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 8740864 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:18.836672+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 9183232 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bf4000/0x0/0x4ffc00000, data 0x29b6d54/0x2a78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:19.837230+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356540 data_alloc: 234881024 data_used: 15572992
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:20.837527+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:21.837727+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:22.837862+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:23.838091+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:24.838288+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.645743370s of 13.833170891s, submitted: 62
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 8994816 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356712 data_alloc: 234881024 data_used: 15581184
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:25.838469+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:26.838693+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:27.838932+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:28.839192+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 8986624 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:29.839491+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 8978432 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356712 data_alloc: 234881024 data_used: 15581184
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:30.839689+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:31.839912+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:32.840066+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:33.840248+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:34.840426+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356712 data_alloc: 234881024 data_used: 15581184
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:35.840638+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 8970240 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32000 session 0x562a91994000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:36.840830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 8962048 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:37.841035+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 8962048 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:38.841193+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 8953856 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:39.841441+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357928 data_alloc: 234881024 data_used: 15659008
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:40.841646+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bde000/0x0/0x4ffc00000, data 0x29cbd54/0x2a8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:41.841860+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:42.842044+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:43.842195+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:44.842361+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 8937472 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.593759537s of 20.597139359s, submitted: 1
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33400 session 0x562a91844780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a919734a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355416 data_alloc: 234881024 data_used: 15663104
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:45.842491+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 9920512 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90944b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c2000/0x0/0x4ffc00000, data 0x21e8cf2/0x22a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:46.842644+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 9920512 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:47.842817+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:48.842969+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:49.843134+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262837 data_alloc: 234881024 data_used: 12136448
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:50.843283+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:51.843440+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:52.843569+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:53.843722+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:54.843915+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262837 data_alloc: 234881024 data_used: 12136448
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:55.844060+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.477752686s of 10.626935005s, submitted: 46
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:56.844223+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:57.844490+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:58.844620+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:59.844775+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262837 data_alloc: 234881024 data_used: 12136448
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:00.844906+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:01.845064+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:02.845231+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:03.845387+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:04.845534+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:05.845680+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262537 data_alloc: 234881024 data_used: 12136448
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:06.845832+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:07.846036+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:08.846176+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a9159f2c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:09.846378+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:10.846516+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262689 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:11.846661+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 9904128 heap: 117497856 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a8e6645a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a915990e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a9086e3c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a916632c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.502140045s of 16.509193420s, submitted: 2
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:12.847507+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a919941e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b33400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b33400 session 0x562a8daeb4a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8daea5a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a90fdc3c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a8e664f00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:13.848083+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:14.848405+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:15.849003+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323294 data_alloc: 234881024 data_used: 12140544
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a900f9e00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:16.849687+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91501800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91501800 session 0x562a8f39a780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:17.850585+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c3a000/0x0/0x4ffc00000, data 0x2970d55/0x2a32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90fde960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a90fdde00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:18.850762+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:19.851036+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 21610496 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:20.851498+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327094 data_alloc: 234881024 data_used: 12079104
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 21651456 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:21.851895+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:22.852128+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c3a000/0x0/0x4ffc00000, data 0x2970d55/0x2a32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:23.852278+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:24.852676+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:25.853183+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376342 data_alloc: 234881024 data_used: 19419136
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:26.853486+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c3a000/0x0/0x4ffc00000, data 0x2970d55/0x2a32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:27.853876+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:28.854083+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:29.854226+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:30.854406+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376342 data_alloc: 234881024 data_used: 19419136
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18145280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.519599915s of 18.695911407s, submitted: 27
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:31.854654+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 15073280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8955000/0x0/0x4ffc00000, data 0x2c4fd55/0x2d11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:32.854952+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:33.855108+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 14049280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:34.855269+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 14049280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:35.855430+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406194 data_alloc: 234881024 data_used: 19533824
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 14049280 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:36.855595+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:37.855765+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:38.855896+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a903ce000 session 0x562a911ea1e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:39.856034+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 14041088 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:40.856160+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406194 data_alloc: 234881024 data_used: 19533824
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:41.856306+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:42.856503+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:43.856675+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:44.856856+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:45.857031+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406194 data_alloc: 234881024 data_used: 19533824
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 14032896 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:46.857132+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 14016512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:47.857309+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 14016512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:48.857519+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a8e800960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91237400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91237400 session 0x562a918a23c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 13541376 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:49.857647+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.576541901s of 18.732076645s, submitted: 63
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8939000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a918a2b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a903ce000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 13475840 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:50.857835+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430056 data_alloc: 234881024 data_used: 19537920
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 13475840 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85a3000/0x0/0x4ffc00000, data 0x3007d55/0x30c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:51.858034+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85a3000/0x0/0x4ffc00000, data 0x3007d55/0x30c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 13475840 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:52.858479+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120d000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120d000 session 0x562a900ad860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 13443072 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:53.858584+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 13443072 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91237400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91237400 session 0x562a90fdd860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:54.858730+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32800 session 0x562a8f45a1e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85a3000/0x0/0x4ffc00000, data 0x3007d55/0x30c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9159f680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 13123584 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:55.858868+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435746 data_alloc: 234881024 data_used: 19537920
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 13123584 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:56.859031+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 11182080 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:57.859166+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f857e000/0x0/0x4ffc00000, data 0x302bd65/0x30ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 11059200 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f857e000/0x0/0x4ffc00000, data 0x302bd65/0x30ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:58.859275+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 11059200 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:59.859417+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:00.859634+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454290 data_alloc: 234881024 data_used: 22233088
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:01.859796+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:02.859957+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f857e000/0x0/0x4ffc00000, data 0x302bd65/0x30ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:03.860069+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 11051008 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a9159e780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:04.860177+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 11042816 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:05.860315+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454290 data_alloc: 234881024 data_used: 22233088
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 11042816 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:06.860443+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.009700775s of 17.099123001s, submitted: 14
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 11681792 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:07.860607+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 11264000 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:08.860744+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35f5d65/0x36b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:09.860949+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:10.861081+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504442 data_alloc: 234881024 data_used: 22503424
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:11.861263+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 11214848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:12.861399+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 11206656 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:13.861517+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35f5d65/0x36b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 11173888 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35f5d65/0x36b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9152f2c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9197f2c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:14.861665+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90ed1a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907cbc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:15.861800+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407567 data_alloc: 234881024 data_used: 19537920
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:16.862068+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:17.862461+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.225418091s of 11.423836708s, submitted: 57
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:18.862599+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 12238848 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:19.862732+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8947000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 12320768 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:20.862938+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407567 data_alloc: 234881024 data_used: 19537920
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8947000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 12320768 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:21.863110+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 12320768 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:22.863402+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a900f9680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9509 writes, 36K keys, 9509 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9509 writes, 2350 syncs, 4.05 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1762 writes, 5246 keys, 1762 commit groups, 1.0 writes per commit group, ingest: 4.67 MB, 0.01 MB/s
                                           Interval WAL: 1762 writes, 786 syncs, 2.24 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 12353536 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8947000/0x0/0x4ffc00000, data 0x2c63d55/0x2d25000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:23.863622+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a91931860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:24.863770+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:25.864081+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278221 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:26.864305+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:27.864645+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:28.864814+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:29.865239+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:30.865582+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278221 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 17932288 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:31.866005+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.437479019s of 13.498319626s, submitted: 21
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:32.866265+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:33.866477+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:34.866687+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:35.867000+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278089 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:36.867144+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:37.867378+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:38.867562+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:39.867854+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:40.868124+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278089 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:41.868419+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:42.868600+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:43.868767+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:44.868905+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:45.869120+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278089 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:46.869283+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8f7d000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:47.869541+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:48.869705+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 16875520 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:49.869840+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.384731293s of 18.389318466s, submitted: 1
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a9086f2c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 18341888 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:50.870012+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308481 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 18341888 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:51.870217+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:52.870422+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:53.870583+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:54.870758+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:55.870965+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308481 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 18325504 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:56.871131+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 18317312 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:57.871389+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 18317312 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:58.871602+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:59.871795+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:00.871956+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318361 data_alloc: 234881024 data_used: 12898304
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:01.872156+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:02.872350+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:03.872549+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:04.872821+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 18112512 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:05.873002+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318361 data_alloc: 234881024 data_used: 12898304
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 18104320 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:06.873180+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 18104320 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:07.873398+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9135000/0x0/0x4ffc00000, data 0x2477ce3/0x2537000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 18104320 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:08.873550+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.978237152s of 19.004953384s, submitted: 15
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115548160 unmapped: 14548992 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:09.873680+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c36000/0x0/0x4ffc00000, data 0x2975ce3/0x2a35000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 16457728 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:10.873830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361853 data_alloc: 234881024 data_used: 13336576
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:11.873981+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:12.874131+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:13.874279+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:14.874370+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:15.874522+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361853 data_alloc: 234881024 data_used: 13336576
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:16.874711+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:17.874873+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:18.875001+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:19.875163+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:20.875307+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361869 data_alloc: 234881024 data_used: 13336576
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:21.875521+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 15990784 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:22.875705+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:23.875861+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:24.876226+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:25.876366+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361869 data_alloc: 234881024 data_used: 13336576
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:26.876572+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:27.876802+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 15982592 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:28.877037+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:29.877187+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:30.877346+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362021 data_alloc: 234881024 data_used: 13340672
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:31.877600+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:32.877819+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:33.877991+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:34.878150+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 15974400 heap: 130097152 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:35.878366+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.573186874s of 26.738054276s, submitted: 88
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a91044000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433977 data_alloc: 234881024 data_used: 13340672
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 27410432 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:36.878576+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:37.878776+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810d000/0x0/0x4ffc00000, data 0x349fce3/0x355f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:38.878921+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91b32400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a91b32400 session 0x562a90fdc1e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:39.879096+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90861400 session 0x562a8db54b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:40.879243+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90861400 session 0x562a907f74a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9034da40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437092 data_alloc: 234881024 data_used: 13340672
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:41.879386+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 27377664 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:42.879630+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 27361280 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:43.879806+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:44.880016+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:45.880149+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508276 data_alloc: 234881024 data_used: 23834624
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:46.880272+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:47.880415+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:48.880533+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:49.880653+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:50.880785+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508276 data_alloc: 234881024 data_used: 23834624
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:51.880933+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 21168128 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f810b000/0x0/0x4ffc00000, data 0x349fd16/0x3561000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:52.881075+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 21209088 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.664892197s of 16.756015778s, submitted: 18
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:53.881195+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124370944 unmapped: 17276928 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:54.881395+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:55.881568+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:56.881717+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:57.881941+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:58.882078+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 17203200 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:59.882216+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:00.882473+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:01.882622+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:02.882753+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:03.882910+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:04.883065+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 17186816 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:05.883226+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 17178624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:06.883417+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 17178624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:07.883656+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 17170432 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:08.883840+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 17170432 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:09.884095+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 17162240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:10.884267+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 17162240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c8000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571238 data_alloc: 234881024 data_used: 24596480
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:11.884482+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 17162240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.341291428s of 19.435050964s, submitted: 39
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:12.884608+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 17727488 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:13.884890+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:14.885027+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:15.885227+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:16.885415+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:17.885610+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:18.885772+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:19.885924+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:20.886060+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:21.886196+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:22.886349+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 17719296 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:23.886493+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:24.886830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:25.887633+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a907ca400 session 0x562a8e275a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a91abfc00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a907ca000 session 0x562a917fc960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a907ca400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:26.887783+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:27.888063+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a907cb400 session 0x562a917fc000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e768400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:28.888190+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:29.888414+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 17711104 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a8f408780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e769c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:30.888607+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 17702912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571222 data_alloc: 234881024 data_used: 24580096
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:31.888799+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 17702912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:32.888963+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 17702912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:33.889181+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.640014648s of 21.650600433s, submitted: 14
Oct 10 10:30:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 17694720 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:34.889314+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 19070976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721446131' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:35.889511+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122757120 unmapped: 18890752 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568870 data_alloc: 234881024 data_used: 24580096
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:36.889642+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122798080 unmapped: 18849792 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:37.889838+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:38.890214+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:39.890397+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3be1d16/0x3ca3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:40.890531+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 18841600 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90ed10e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a90716960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568870 data_alloc: 234881024 data_used: 24580096
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:41.890692+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a915d5680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:42.890825+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:43.891058+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:44.891281+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:45.891506+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366479 data_alloc: 234881024 data_used: 13328384
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:46.891688+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x298bce3/0x2a4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:47.891873+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.427377701s of 14.190353394s, submitted: 253
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8e6650e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:48.892017+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 25518080 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a914d1c20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:49.892210+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:50.892420+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:51.892568+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:52.892699+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:53.892871+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:54.893043+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:55.893228+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:56.893396+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:57.893566+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:58.893774+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:59.894018+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:00.894173+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:01.894370+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:02.894521+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:03.894716+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:04.894852+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:05.895051+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:06.895232+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:07.895434+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:08.895623+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:09.895800+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:10.895989+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:11.896174+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293499 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:12.896296+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:13.896420+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:14.896557+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:15.896689+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 26394624 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.166027069s of 28.204250336s, submitted: 11
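The _kv_sync_thread utilization lines are the clearest load indicator in this capture: the RocksDB sync thread is idle for almost the entire reporting window. Converting the three samples that appear in this section into busy percentages:

```python
# Turn _kv_sync_thread utilization lines into busy percentages; the sample
# strings are copied from this section of the log.
import re

samples = [
    "idle 28.166027069s of 28.204250336s, submitted: 11",
    "idle 10.468525887s of 10.493452072s, submitted: 11",
    "idle 12.552070618s of 12.580025673s, submitted: 13",
]
for s in samples:
    idle, total, n = map(float, re.search(
        r"idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)", s).groups())
    print(f"busy {(1 - idle/total):.3%} over {total:.1f}s, "
          f"{int(n)} txns ({int(n)/total:.2f}/s)")
```

Well under 1% busy at roughly one transaction per second, i.e. this OSD is close to idle here.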
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f043860
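The handle_auth_request / ms_handle_reset pairs track short-lived inbound connections: each one receives a cephx challenge and its session is later reset, with the same connection address (e.g. 0x562a8e8d3000) reused over time. A sketch that groups these events by connection pointer, using sample lines from this log (pointer reuse means such grouping is only valid within a short window):

```python
# Group auth-challenge and reset events by connection address.
import re
from collections import defaultdict

events = [
    "monclient: handle_auth_request added challenge on 0x562a8e8d3000",
    "osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f043860",
    "monclient: handle_auth_request added challenge on 0x562a904a4000",
    "osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a8ef3ad20",
]
by_con = defaultdict(list)
for e in events:
    con = re.search(r"0x[0-9a-f]+", e).group(0)   # first pointer is the con
    kind = "reset" if "ms_handle_reset" in e else "challenge"
    by_con[con].append(kind)
for con, kinds in by_con.items():
    print(con, "->", " then ".join(kinds))
```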
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:16.896887+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305609 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 26206208 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:17.897106+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 26206208 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:18.897246+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 26206208 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:19.897529+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a8ef3ad20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114597888 unmapped: 27049984 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90861400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92a9000/0x0/0x4ffc00000, data 0x2302d06/0x23c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:20.897689+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114597888 unmapped: 27049984 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:21.898083+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307030 data_alloc: 234881024 data_used: 11579392
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:22.898194+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:23.898342+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:24.898488+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92a9000/0x0/0x4ffc00000, data 0x2302d06/0x23c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:25.898674+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 26869760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90861400 session 0x562a90ed0000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:26.898805+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.468525887s of 10.493452072s, submitted: 11
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297160 data_alloc: 234881024 data_used: 11554816
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90f943c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:27.899002+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:28.899146+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:29.899287+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:30.899508+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:31.900029+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296604 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:32.900534+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:33.901054+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:34.901600+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:35.901764+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:36.902310+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296604 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:37.902663+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 26861568 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:38.902907+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 26853376 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.552070618s of 12.580025673s, submitted: 13
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90fdf4a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:39.903119+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:40.903312+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:41.903673+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368766 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a79ce3/0x2b39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:42.904246+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a79ce3/0x2b39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 26238976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:43.904487+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 26230784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:44.904785+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 26230784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a910441e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:45.905023+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a79ce3/0x2b39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 26591232 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:46.905237+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370402 data_alloc: 234881024 data_used: 11554816
Oct 10 10:30:49 compute-0 ceph-osd[81941]: mgrc ms_handle_reset ms_handle_reset con 0x562a8db47400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248
Oct 10 10:30:49 compute-0 ceph-osd[81941]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: get_auth_request con 0x562a90861400 auth_method 0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: mgrc handle_mgr_configure stats_period=5
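Unlike everything around it, this mgrc burst is a genuine state change: the OSD's session to the active mgr at 192.168.122.100 was reset, a new v2 session was negotiated, and the mgr re-issued its stats configuration (stats_period=5, i.e. report every 5 seconds). A simple noise filter, with the periodic-message list chosen from this capture, surfaces exactly this kind of event:

```python
# Drop the periodic message types that dominate this log so one-off events
# such as the mgrc reconnect stand out. PERIODIC is tuned to this capture.
PERIODIC = ("monclient: tick", "_check_auth_tickets", "_check_auth_rotating",
            "tune_memory", "_resize_shards", "commit_cache_size",
            "heartbeat osd_stat")

sample = [
    "monclient: tick",
    "mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248",
    "mgrc reconnect Starting new session with "
    "[v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]",
    "mgrc handle_mgr_configure stats_period=5",
    "prioritycache tune_memory target: 4294967296 ...",
]
for line in sample:
    if not any(p in line for p in PERIODIC):
        print(line)
```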
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 26451968 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:47.905519+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 26451968 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:48.905649+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:49.905825+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b0f000/0x0/0x4ffc00000, data 0x2a9dce3/0x2b5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:50.906000+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b0f000/0x0/0x4ffc00000, data 0x2a9dce3/0x2b5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:51.906209+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422690 data_alloc: 234881024 data_used: 19124224
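This _resize_shards sample is the one outlier in the section: data_used jumps from ~11 MiB to ~18 MiB, matching the brief growth in stored object data visible in the surrounding heartbeats (data 0x21e8ce3 up to 0x2a9dce3 and back), before both fall back. Tracking the data_used values from this section in order:

```python
# data_used across the _resize_shards samples in this section, in order
# (long runs of the steady 11550720-byte value collapsed for brevity).
samples = [11550720, 11579392, 11554816, 11550720, 11554816, 19124224, 11550720]
for prev, cur in zip(samples, samples[1:]):
    print(f"{cur/2**20:6.2f} MiB  (delta {cur - prev:+,} bytes)")
```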
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:52.906446+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8e1bfe00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a8ef27e00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24625152 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.924575806s of 13.989303589s, submitted: 22
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a8f45b680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:53.906621+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:54.906777+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:55.907010+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:56.907183+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:57.907388+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:58.907525+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:59.907690+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:00.907829+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:01.907982+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:02.908135+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:03.908296+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:04.908448+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:05.908615+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:06.908819+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:07.908991+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:08.909149+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:09.909350+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:10.909554+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:11.909699+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304136 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:12.909971+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:13.910129+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f93c4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:14.910242+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:15.910371+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 27852800 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.160041809s of 23.180587769s, submitted: 10
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90ed0780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:16.910536+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343576 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:17.910707+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:18.910839+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:19.910991+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec1000/0x0/0x4ffc00000, data 0x26ebce3/0x27ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:20.911150+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:21.911312+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343576 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:22.911507+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:23.911651+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec1000/0x0/0x4ffc00000, data 0x26ebce3/0x27ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:24.911873+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 27820032 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a8e274b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:25.912043+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90860c00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 27803648 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:26.912173+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346674 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 27779072 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:27.912404+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:28.912599+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:29.912740+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:30.912907+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:31.913038+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381786 data_alloc: 234881024 data_used: 16793600
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:32.913199+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:33.913363+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:34.913507+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x26ebd06/0x27ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:35.914096+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 26304512 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:36.914225+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382242 data_alloc: 234881024 data_used: 16805888
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.772865295s of 20.838567734s, submitted: 15
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 25526272 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:37.943005+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c48000/0x0/0x4ffc00000, data 0x2963d06/0x2a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 24928256 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:38.943153+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:39.943278+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:40.943490+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:41.943654+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406652 data_alloc: 234881024 data_used: 17154048
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:42.943819+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:43.943977+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:44.944114+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:45.944277+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:46.944448+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406652 data_alloc: 234881024 data_used: 17154048
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:47.945027+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:48.945238+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:49.945451+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8832000/0x0/0x4ffc00000, data 0x2969d06/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:50.945670+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 24854528 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:51.945868+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406652 data_alloc: 234881024 data_used: 17154048
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8ef4a780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.981020927s of 15.057462692s, submitted: 16
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90860c00 session 0x562a919943c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 24870912 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:52.946025+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9197f860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:53.946160+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:54.946283+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:55.946511+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:56.946630+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:57.946867+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:58.947090+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:59.947260+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:00.947409+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:01.947596+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:02.947740+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:03.947896+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:04.948028+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:05.948229+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:06.948417+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:07.948662+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:08.948837+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:09.949045+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:10.949400+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:11.949485+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:12.949602+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:13.949737+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:14.949895+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:15.950073+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:16.950188+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311130 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 27402240 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:17.950364+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.378128052s of 25.476999283s, submitted: 31
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a9086f0e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 28278784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:18.950460+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 28278784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:19.950569+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e38000/0x0/0x4ffc00000, data 0x2364ce3/0x2424000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 28278784 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:20.950761+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a90934780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9084a3c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 28270592 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:21.950907+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9089c000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9089c000 session 0x562a919752c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335038 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90f64b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 28286976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:22.950981+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e12000/0x0/0x4ffc00000, data 0x2388d16/0x244a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 28286976 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:23.951111+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:24.951437+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:25.951605+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:26.951782+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347681 data_alloc: 234881024 data_used: 12922880
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:27.951997+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:28.952141+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e12000/0x0/0x4ffc00000, data 0x2388d16/0x244a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:29.952269+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 28237824 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:30.952390+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 28229632 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:31.952544+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347681 data_alloc: 234881024 data_used: 12922880
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 28229632 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:32.952670+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 28229632 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:33.952845+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.913131714s of 15.963829994s, submitted: 17
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e12000/0x0/0x4ffc00000, data 0x2388d16/0x244a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 21536768 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:34.953012+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 22929408 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a918a3a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a8e7d50e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:35.953128+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a91598d20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e4c2400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e4c2400 session 0x562a91ae2780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90ed05a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a9159e780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a9159e5a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a8f408b40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e4c3400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e4c3400 session 0x562a8f4085a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 21749760 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:36.953276+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455274 data_alloc: 234881024 data_used: 13545472
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 21733376 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:37.953478+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 21733376 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:38.953624+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f4094a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 21725184 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a8f408960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:39.953822+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 21700608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:40.954721+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 21692416 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:41.955405+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a9076b4a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a9076af00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455290 data_alloc: 234881024 data_used: 13545472
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 21692416 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f49000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:42.955699+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
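
The heartbeat's store_statfs fields are hex byte counts. Assuming the field order used by Ceph's store_statfs_t printer, available/internally-reserved/total for the first triple and stored/allocated for the data pair (worth confirming against the source for this build), this is a ~20 GiB OSD with ~19.9 GiB free and ~47 MiB of object data. A decoding sketch in Python:

    import re

    hb = ("store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, "
          "compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5)")

    avail, reserved, total = (int(x, 16) for x in
                              re.match(r"store_statfs\((\w+)/(\w+)/(\w+)", hb).groups())
    stored, allocated = (int(x, 16) for x in re.search(r"data (\w+)/(\w+)", hb).groups())
    GiB = 2**30
    print(f"total {total/GiB:.1f} GiB, available {avail/GiB:.2f} GiB, "
          f"data stored {stored/2**20:.1f} MiB (allocated {allocated/2**20:.1f} MiB)")
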
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 119963648 unmapped: 21684224 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:43.955833+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 18702336 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:44.955957+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17358848 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:45.956213+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 17350656 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:46.956384+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512442 data_alloc: 234881024 data_used: 22044672
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 17317888 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:47.956586+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124338176 unmapped: 17309696 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:48.956711+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124338176 unmapped: 17309696 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:49.956843+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8291000/0x0/0x4ffc00000, data 0x2effd87/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124370944 unmapped: 17276928 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:50.956981+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 17358848 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:51.957183+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512442 data_alloc: 234881024 data_used: 22044672
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 17350656 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:52.957411+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 17350656 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:53.957888+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.338996887s of 20.593862534s, submitted: 102
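
The kv sync thread reports 20.339 s idle out of a 20.594 s window with 102 transactions submitted: about 98.8% idle and roughly 2.5 ms of busy time per submitted batch, consistent with the near-empty caches above. The arithmetic as a checkable sketch:

    idle, window, submitted = 20.338996887, 20.593862534, 102
    busy = window - idle
    print(f"idle {100*idle/window:.1f}%, busy {busy:.3f}s, "
          f"~{1000*busy/submitted:.2f} ms busy per submitted batch")
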
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 11083776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:54.958378+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 10895360 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:55.958642+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7821000/0x0/0x4ffc00000, data 0x3977d87/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:56.958922+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596236 data_alloc: 234881024 data_used: 23085056
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:57.959125+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:58.959265+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:59.959602+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:00.959805+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:01.960059+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596236 data_alloc: 234881024 data_used: 23085056
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:02.960254+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:03.960550+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:04.960678+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.052834511s of 11.249748230s, submitted: 88
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:05.960844+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:06.961026+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1595476 data_alloc: 234881024 data_used: 23089152
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:07.961239+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 12124160 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:08.961481+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:09.961790+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:10.961949+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:11.962082+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596388 data_alloc: 234881024 data_used: 23158784
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:12.962245+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:13.962413+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f7817000/0x0/0x4ffc00000, data 0x3981d87/0x3a45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:14.962575+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 12107776 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f49000 session 0x562a9076b680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9086ed20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:15.962731+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f84bf000/0x0/0x4ffc00000, data 0x26e0d16/0x27a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:16.962825+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392317 data_alloc: 234881024 data_used: 13545472
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:17.962989+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f84bf000/0x0/0x4ffc00000, data 0x26e0d16/0x27a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:18.963178+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:19.963381+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 18522112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a8f043860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90f952c0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.779786110s of 14.850324631s, submitted: 25
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a9190ba40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:20.963525+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:21.963634+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:22.963760+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:23.963939+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:24.964048+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:25.964183+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:26.964373+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 19562496 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:27.964559+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:28.964682+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:29.965016+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:30.965166+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:31.965409+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:32.965567+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:33.965779+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:34.965946+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:35.966071+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:36.966205+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:37.966416+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:38.966621+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:39.966773+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:40.966963+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:41.967103+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331692 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:42.967389+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:43.967602+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:44.968407+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:45.969467+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 19546112 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90537800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90537800 session 0x562a91045a40
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a90ed0d20
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a91995860
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a8f408780
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:46.969611+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.354364395s of 26.400856018s, submitted: 19
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a900ac5a0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364392 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:47.970235+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:48.970411+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:49.971041+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:50.971211+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:51.971363+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f49000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f49000 session 0x562a9197e000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364392 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:52.991228+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19529728 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a9084a1e0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8bb1000/0x0/0x4ffc00000, data 0x25ebce3/0x26ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a904a4000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a904a4000 session 0x562a90f94f00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:53.991399+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90536000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90536000 session 0x562a90f94000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 19349504 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a90f18400
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:54.991591+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 19349504 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a9120c800
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:55.991721+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:56.991912+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398511 data_alloc: 234881024 data_used: 15749120
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:57.992085+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:58.992266+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:59.992524+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:00.992668+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:01.992825+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398511 data_alloc: 234881024 data_used: 15749120
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:02.992969+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:03.993136+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:04.993256+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 19652608 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.759384155s of 18.817558289s, submitted: 11
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:05.993379+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8b8b000/0x0/0x4ffc00000, data 0x260fd16/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:06.993633+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448577 data_alloc: 234881024 data_used: 15872000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:07.993959+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:08.994094+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:09.994253+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:10.994466+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:11.994656+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448577 data_alloc: 234881024 data_used: 15872000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:12.994837+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:13.995057+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:14.995263+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:15.995378+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x2be9d16/0x2cab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 16506880 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a9120c800 session 0x562a918a2960
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a90f18400 session 0x562a91955e00
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:16.995507+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: handle_auth_request added challenge on 0x562a8e8d3000
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.116869926s of 11.250986099s, submitted: 27
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 18972672 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 ms_handle_reset con 0x562a8e8d3000 session 0x562a8f45b680
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:17.995896+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:18.996031+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:19.996205+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:20.996311+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:21.996524+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:22.996644+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:23.996818+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:24.996960+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:25.997077+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:26.997213+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:27.997412+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:28.997577+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:29.997720+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:30.997885+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:31.997994+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:32.998143+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:33.998309+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:34.998514+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:35.998635+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:36.998862+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:37.999111+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:38.999266+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:39.999412+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 18964480 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:40.999553+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:41.999782+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:42.999961+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:44.000140+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:45.000291+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:46.000503+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:47.000749+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:48.000995+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 18956288 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:49.001453+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:50.001620+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:51.001750+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:52.001914+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:53.002082+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:54.002204+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:55.002387+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:56.002517+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:57.002666+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:58.002828+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:59.002963+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:00.003072+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:01.003198+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:02.003365+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122699776 unmapped: 18948096 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:03.003516+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:04.003703+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:05.003823+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:06.004008+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:07.004116+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:08.004257+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:09.004399+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:10.004528+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:11.004635+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:12.004763+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:13.004872+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:14.005017+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:15.005162+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:16.005304+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:17.005483+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:18.005633+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:19.005748+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:20.005862+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:21.006078+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:22.006209+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:23.006375+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:24.006492+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 18931712 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:25.006757+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18923520 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:26.006885+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18923520 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:27.006994+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18923520 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:28.007138+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 18857984 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config diff' '{prefix=config diff}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config show' '{prefix=config show}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:29.007262+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 18939904 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:30.007360+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 19054592 heap: 141647872 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:31.007475+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'log dump' '{prefix=log dump}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 133636096 unmapped: 19054592 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'perf dump' '{prefix=perf dump}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:32.007594+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'perf schema' '{prefix=perf schema}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 29818880 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:33.007711+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:34.007830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:35.007930+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:36.008013+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:37.008141+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:38.008282+0000)
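[annotation] The monclient triple repeats once per second: tick, ticket check, rotating-key check. Two timing details are recoverable from it. First, the printed expiry cutoff advances by ~1.000 s per tick (in Ceph this stamp is a now-minus-margin cutoff rather than the literal key expiry, so it tracks the daemon's clock). Second, every line in this stretch carries the same syslog timestamp, 10:30:49, roughly eight minutes after the internal stamps, so the whole run reached the journal in a single flush; the 'log dump' command seen earlier is a plausible trigger, though that causal link is an inference. A quick check:

```python
from datetime import datetime, timezone

# Hedged sketch: tick cadence and journal-flush skew from two
# consecutive _check_auth_rotating stamps (offsets normalized from
# +0000 to +00:00 for fromisoformat).
e1 = datetime.fromisoformat("2025-10-10T10:22:37.008141+00:00")
e2 = datetime.fromisoformat("2025-10-10T10:22:38.008282+00:00")
journal = datetime(2025, 10, 10, 10, 30, 49, tzinfo=timezone.utc)

print(f"tick interval: {(e2 - e1).total_seconds():.3f} s")      # ~1.000 s
# Lower bound on the flush delay: the cutoff trails `now` by a fixed
# margin, so the real skew is somewhat smaller than this.
print(f"journal skew:  {(journal - e2).total_seconds():.0f} s")  # ~491 s
```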
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:39.008368+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:40.008492+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 29810688 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:41.008638+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:42.008774+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:43.008882+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:44.008986+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:45.009116+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:46.009253+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:47.009379+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:48.009518+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:49.009672+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:50.009828+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:51.009944+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 29802496 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:52.010106+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:53.010233+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:54.010354+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:55.010473+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:56.010580+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:57.010696+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:58.010822+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 29794304 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:59.010954+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:00.011059+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:01.011246+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:02.011494+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:03.011590+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:04.011750+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:05.011905+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:06.012041+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 29786112 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:07.012194+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:08.012765+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:09.012892+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:10.013008+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:11.013153+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:12.013277+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:13.013410+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:14.013536+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:15.013644+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:16.013764+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:17.013905+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:18.014129+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:19.014239+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 29777920 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:20.014368+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:21.014498+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:22.014635+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:23.014754+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:24.014891+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:25.015049+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:26.015181+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:27.015347+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:28.015519+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:29.015799+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 29769728 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:30.015953+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:31.016111+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:32.016294+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:33.016446+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:34.016604+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:35.016720+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:36.016913+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:37.017059+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 29761536 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:38.017278+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:39.017450+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:40.017651+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:41.017799+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:42.017914+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:43.018026+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:44.018156+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:45.018290+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:46.018430+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 29753344 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:47.018602+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:48.018784+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:49.018894+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:50.019011+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:51.019146+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:52.019354+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:53.019504+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:54.019618+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:55.019760+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:56.019916+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 29745152 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:57.020210+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:58.020671+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:59.021114+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:00.021371+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:01.021993+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:02.022402+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:03.022673+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:04.022975+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:05.023358+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 29736960 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:06.023689+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:07.024031+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:08.024289+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:09.024460+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:10.024578+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:11.024710+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:12.025411+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:13.025605+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:14.025755+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:15.025933+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:16.026113+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:17.026267+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 29728768 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:18.026425+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 29720576 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:19.026563+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 29720576 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:20.026697+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 29720576 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:21.026879+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 30105600 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:22.027019+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 30105600 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:23.027208+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 2984 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1517 writes, 5172 keys, 1517 commit groups, 1.0 writes per commit group, ingest: 5.56 MB, 0.01 MB/s
                                           Interval WAL: 1517 writes, 634 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 30105600 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:24.027397+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 30105600 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:25.027510+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:26.027683+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:27.027847+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:28.028026+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:29.028185+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:30.028369+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:31.028488+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:32.028656+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:33.028826+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:34.028978+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:35.029114+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 30097408 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:36.029255+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:37.029425+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:38.029618+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:39.029763+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:40.029897+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:41.030013+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:42.030144+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:43.030305+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:44.030528+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:45.030714+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 30089216 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:46.030870+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:47.031007+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:48.031173+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:49.031394+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:50.031541+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:51.031656+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:52.031976+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 30081024 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:53.032167+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:54.032304+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:55.032507+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:56.032652+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:57.032776+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:58.032971+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:59.033133+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:00.033297+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:01.033450+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 30072832 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:02.033605+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:03.033742+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:04.033885+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:05.034023+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:06.034195+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:07.034312+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:08.034550+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 30261248 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:09.034717+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 30261248 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:10.034829+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 30261248 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:11.034958+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:12.035176+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:13.035312+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:14.035527+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:15.035649+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:16.035778+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:17.036004+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:18.036225+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:19.036337+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:20.036469+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:21.036581+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:22.036692+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 30253056 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:23.036817+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:24.036898+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:25.037046+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:26.037183+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:27.037355+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:28.037493+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:29.037631+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:30.037797+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 30244864 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:31.037941+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:32.038140+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:33.038288+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:34.038409+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:35.038550+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:36.038704+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:37.038848+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:38.039072+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:39.039256+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:40.039416+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:41.039591+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:42.039809+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:43.039982+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 30236672 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:44.040115+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:45.040309+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:46.040538+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:47.040688+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:48.040863+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 234881024 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:49.041017+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:50.041187+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:51.041380+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 30228480 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:52.041551+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:53.041657+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:54.041846+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:55.042009+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:56.042166+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:57.042303+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:58.042486+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:59.042640+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:00.042794+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:01.042930+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122470400 unmapped: 30220288 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:02.043119+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:03.043284+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:04.043460+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:05.043672+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:06.043818+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:07.043979+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:08.044154+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:09.044347+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:10.044485+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:11.044595+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:12.044724+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122478592 unmapped: 30212096 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:13.044874+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:14.045019+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:15.045174+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:16.045357+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:17.045491+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:18.045692+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:19.045849+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:20.045996+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:21.046211+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:22.046382+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 30203904 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:23.046493+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 30195712 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:24.046645+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 30195712 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:25.046782+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 30195712 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:26.046912+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 30195712 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:27.047024+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 30195712 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:28.047377+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 30195712 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:29.047543+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 30187520 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:30.047679+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 30187520 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb3000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:31.047899+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 30187520 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:32.048019+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 30187520 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:33.048214+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 30187520 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:34.048381+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339426 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 317.013183594s of 317.062683105s, submitted: 19
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 30171136 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:35.048580+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122626048 unmapped: 30064640 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:36.048777+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 30629888 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:37.048984+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:38.049207+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:39.049424+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:40.049610+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:41.049820+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:42.050002+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:43.050146+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:44.050380+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:45.050646+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:46.050814+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:47.050942+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:48.051223+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:49.051403+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:50.051546+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:51.051723+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:52.051888+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:53.052064+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:54.052270+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:55.052536+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:56.052775+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:57.052931+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:58.053176+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:59.053368+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:00.053520+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:01.053784+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:02.053952+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:03.054151+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:04.054287+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:05.054432+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:06.054576+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:07.054866+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:08.055100+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:09.055257+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:10.055396+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:11.055554+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:12.055801+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:13.056012+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:14.056163+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:15.056354+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:16.056543+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:17.056758+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:18.056976+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:19.057162+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:20.057375+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:21.057553+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:22.057714+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:23.057929+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:24.058139+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:25.058760+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:26.058902+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:27.059068+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:28.059467+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 30597120 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:29.059603+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:30.060542+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:31.060789+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:32.060929+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:33.061076+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:34.061242+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:35.061374+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:36.061561+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:37.061696+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:38.061889+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 30588928 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:39.062015+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:40.062143+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:41.062274+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:42.062424+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:43.062569+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:44.062702+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:45.062858+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:46.063115+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 30580736 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:47.063261+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:48.063538+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:49.063881+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:50.064106+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:51.064410+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:52.064581+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:53.064795+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:54.064967+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 30572544 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:55.065279+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:56.065466+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:57.065654+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:58.065836+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:59.066093+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:00.066396+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:01.066662+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 30564352 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:02.066904+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:03.067125+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:04.067455+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:05.067708+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:06.067964+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:07.068098+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:08.068262+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:09.068401+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:10.068561+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:11.068774+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:12.069080+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 30556160 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:13.069281+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:14.069434+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:15.070974+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:16.072405+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:17.073657+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:18.074511+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:19.075619+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:20.075951+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:21.076948+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:22.077820+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:23.078640+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:24.079697+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:25.080714+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 30547968 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:26.081607+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:27.082301+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:28.083039+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:29.083257+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:30.083444+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:31.083901+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:32.084298+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:33.084694+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:34.084830+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:35.085010+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 30539776 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:36.085147+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:37.085264+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:38.085558+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:39.085762+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:40.086018+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:41.086147+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:42.086593+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:43.086896+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:44.087193+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:45.087480+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:46.087853+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:47.088028+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:48.088235+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:49.088354+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:50.088482+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:51.088663+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:52.088776+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:53.088944+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:54.089106+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:55.089250+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:56.089445+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:57.089594+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:58.089715+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:59.089858+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:00.090021+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:01.090142+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:02.090271+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:03.090431+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:04.090540+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:05.090701+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:06.090829+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:07.090969+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:08.091130+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:09.091244+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:10.091357+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:11.091462+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:12.091607+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:13.091782+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:14.091904+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:15.092060+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:16.092233+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:17.092385+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:18.092545+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:19.092671+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:20.093013+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:21.093264+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:22.093467+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:23.093639+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets getting new tickets!
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:24.094062+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _finish_auth 0
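[annotation] This is the one point in the section where the routine changes: _check_auth_tickets decides the tickets need renewing ("getting new tickets!"), sends the request to mon.compute-0 at v2:192.168.122.100:3300/0, and _finish_auth 0 (return code 0) records that the renewal succeeded before the one-second rhythm resumes. A small stdin filter that spots such renewal round-trips in a log like this one (an illustrative helper, not a Ceph tool):

    import sys

    def renewals(lines):
        """Yield the outcome of each 'getting new tickets!' request."""
        pending = False
        for line in lines:
            if "getting new tickets!" in line:
                pending = True
            elif pending and "_finish_auth" in line:
                rc = int(line.rsplit(None, 1)[-1])   # trailing return code
                yield "ok" if rc == 0 else f"failed rc={rc}"
                pending = False

    # usage: grep monclient ceph-osd.log | python spot_renewals.py
    for outcome in renewals(sys.stdin):
        print("ticket renewal:", outcome)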
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:24.095353+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:25.094459+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:26.094746+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 30531584 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:27.095004+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:28.095248+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:29.095940+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:30.096494+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:31.096838+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:32.097392+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:33.098398+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:34.098869+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:35.099039+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:36.099213+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:37.099369+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:38.099663+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:39.099792+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:40.099981+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:41.100155+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:42.100272+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:43.101449+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:44.101627+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:45.101752+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:46.101867+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:47.102007+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:48.102176+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:49.102376+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:50.102555+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:51.102859+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:52.103089+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:53.103276+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:54.103459+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:55.103596+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:56.103747+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:57.103981+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:58.104143+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:59.104284+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:00.104465+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:01.104602+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:02.104814+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:03.104979+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:04.105140+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:05.105397+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:06.105512+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:07.105700+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:08.105846+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:09.106015+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:10.106192+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:11.106356+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 30523392 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:12.106556+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 30515200 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:13.106682+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 30515200 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:14.106822+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 30515200 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x21e8ce3/0x22a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-0 ceph-osd[81941]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-0 ceph-osd[81941]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339134 data_alloc: 218103808 data_used: 11550720
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:15.106946+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 30515200 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:16.107074+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config diff' '{prefix=config diff}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 30457856 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config show' '{prefix=config show}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:17.107179+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 30851072 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: tick
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-0 ceph-osd[81941]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:18.107340+0000)
Oct 10 10:30:49 compute-0 ceph-osd[81941]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 30720000 heap: 152690688 old mem: 2845415832 new mem: 2845415832
Oct 10 10:30:49 compute-0 ceph-osd[81941]: do_command 'log dump' '{prefix=log dump}'
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27209 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18501 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27227 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28054 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.27179 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.27994 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.18468 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3444631537' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.27191 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.28015 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3721446131' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.18486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1092835265' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/724274365' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.27209 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2000313604' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3657722192' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:49 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3730549091' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:30:50 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598726768' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27239 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:50 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 10 10:30:50 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712051660' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:50 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:50 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:50 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:50.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27245 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28099 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:50 compute-0 crontab[301984]: (root) LIST (root)
Oct 10 10:30:50 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18540 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28117 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.18501 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.27227 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.28054 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.18519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/598726768' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.27239 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.28075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3542325360' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.18525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3712051660' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2779081294' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/838423744' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27266 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18564 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28132 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27281 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 10 10:30:51 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144306905' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18582 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27296 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18603 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.27245 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.28099 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.18540 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.28117 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3755513474' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.27266 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.18564 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1286859206' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2752456730' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.28132 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.27281 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4144306905' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.18582 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/251813461' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/821557892' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1210927648' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:52 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27308 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 10 10:30:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586037795' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:52 compute-0 nova_compute[261329]: 2025-10-10 10:30:52.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 10 10:30:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/865574371' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:52 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000032s ======
Oct 10 10:30:52 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct 10 10:30:52 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 10 10:30:52 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221259386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:52 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27326 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.27296 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.18603 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2364379109' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3064887336' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.27308 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2197365172' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1586037795' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/865574371' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2672443346' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2319149536' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2221259386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1413997151' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2285275171' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2579466127' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 10 10:30:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351759873' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 10 10:30:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094752344' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 nova_compute[261329]: 2025-10-10 10:30:53.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 10:30:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828719708' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 10 10:30:53 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847432332' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 10 10:30:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052434041' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 10 10:30:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827572701' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.27326 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1814715029' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1351759873' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/4094752344' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3898864091' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2844202342' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1182933054' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/112314830' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/4229550321' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1828719708' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/847432332' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4280476709' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1369088152' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/853849376' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/29899441' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1052434041' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/827572701' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28288 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:54 compute-0 podman[302415]: 2025-10-10 10:30:54.247027225 +0000 UTC m=+0.085642326 container health_status 8af152a99056e791b5d2289ffce14b418b23b99e924d88ceb73bd5d391794013 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:30:54 compute-0 podman[302416]: 2025-10-10 10:30:54.262403707 +0000 UTC m=+0.094450357 container health_status 8fc7cc306f5ec0294abf533ed167a3cb2edfbe0cddced561685b9ed3f97e2ed1 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:30:54 compute-0 podman[302417]: 2025-10-10 10:30:54.326620074 +0000 UTC m=+0.156667880 container health_status be716a68293413aeb8010505707b8e20a0460f9cec15538d07af9b34a9254efb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:30:54 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:54 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:54 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 10 10:30:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109351117' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28312 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:54 compute-0 systemd[1]: Starting Hostname Service...
Oct 10 10:30:54 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-0 systemd[1]: Started Hostname Service.
Oct 10 10:30:54 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 10 10:30:54 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820525157' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 10 10:30:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466512300' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1625571292' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.28288 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2382901546' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/84867561' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2698340361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3675716813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/109351117' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1162105813' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2686277470' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1820525157' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1466512300' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28336 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18726 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 10 10:30:55 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980240607' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28357 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27446 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18750 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28372 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:56.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.28312 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.28336 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3951488466' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2884419151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.18726 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3980240607' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3888181591' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1094418802' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2207188954' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27464 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18780 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:56 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:56 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:56.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18807 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 10 10:30:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478578613' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27488 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:56 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18825 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.28357 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.27446 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.18750 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.28372 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.27458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.28384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.18780 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/988730199' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/478578613' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1923095157' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 10 10:30:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453675055' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:57.283Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:57 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-0-xkdepb[73841]: ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: [prometheus INFO cherrypy.access.139980104289152] ::ffff:192.168.122.100 - - [10/Oct/2025:10:30:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 nova_compute[261329]: 2025-10-10 10:30:57.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 10:30:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/477626751' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:30:57 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053950729' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18876 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28471 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:58.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622925211' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.27470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.18798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.18807 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.27488 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.28420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.18825 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3022188802' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3453675055' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.27506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/477626751' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2053950729' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1478406306' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1622925211' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3424359302' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27539 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18897 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 nova_compute[261329]: 2025-10-10 10:30:58.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/102737578' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:30:58 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:30:58 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:30:58 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:58.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:30:58 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:58 compute-0 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-alertmanager-compute-0[103214]: ts=2025-10-10T10:30:58.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 10 10:30:58 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 10:30:58 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691596010' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.18858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.27527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.18876 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.28471 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.27539 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.18897 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/102737578' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1807734272' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1691596010' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4055382231' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18954 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:59 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:59 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 10:30:59 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203338503' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:30:59 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27602 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:31:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:31:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:31:00.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:31:00 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28552 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 10 10:31:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525258780' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.27557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.18954 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1183362410' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1510796144' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/203338503' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/1997365182' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:00 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/525258780' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:31:00 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:31:00 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:31:00 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:31:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:31:00 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 10 10:31:00 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605429018' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 10 10:31:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780799251' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 10 10:31:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2314951990' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.27602 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.28552 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1908662771' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/1605429018' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/3635287387' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/2775988440' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/2780799251' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/2314951990' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3838984059' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 10 10:31:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.18996 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:01 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28579 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:01 compute-0 anacron[216710]: Job `cron.weekly' started
Oct 10 10:31:01 compute-0 ceph-mgr[73845]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:31:01 compute-0 anacron[216710]: Job `cron.weekly' terminated
Oct 10 10:31:01 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 10 10:31:01 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685684035' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27644 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:31:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:31:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.100 - anonymous [10/Oct/2025:10:31:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:31:02 compute-0 ceph-mon[73551]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mon[73551]: from='client.18996 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3786071962' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/685684035' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/444957168' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28597 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 10 10:31:02 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622754536' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:02 compute-0 nova_compute[261329]: 2025-10-10 10:31:02.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:31:02 compute-0 radosgw[95218]: ====== starting new request req=0x7f96beba75d0 =====
Oct 10 10:31:02 compute-0 radosgw[95218]: ====== req done req=0x7f96beba75d0 op status=0 http_status=200 latency=0.001000033s ======
Oct 10 10:31:02 compute-0 radosgw[95218]: beast: 0x7f96beba75d0: 192.168.122.102 - anonymous [10/Oct/2025:10:31:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct 10 10:31:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.28603 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:02 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.19020 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 10 10:31:03 compute-0 ceph-mon[73551]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969988928' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.28579 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.27644 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.28597 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/3622754536' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/3623178689' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.101:0/1826634635' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.102:0/4000583183' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mon[73551]: from='client.? 192.168.122.100:0/969988928' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:31:03 compute-0 nova_compute[261329]: 2025-10-10 10:31:03.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:31:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.27668 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-0 ceph-mgr[73845]: log_channel(audit) log [DBG] : from='client.19035 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
